9.1: Unit Overview and Content Focus
Created by: CK-12
This unit focuses on the concepts of stress, strength, deflection, point of failure, allowable strength, bending strength, and factor of safety. To model these concepts in action and let the
students conduct their own tests, the experiment measures the strength of popsicle sticks of varying sizes. The strength of a material is defined as the amount of stress that causes failure. Failure
occurs when the stress exceeds the strength, causing the popsicle stick to break under the given load (weight). Once the basic concepts of strength of materials are understood, the students
are given an engineering project: designing a bridge that can withstand a maximum amount of stress before failing. The students test their designs using a computer-based bridge-building
simulation, make any necessary modifications, and then construct the actual bridge from popsicle sticks and glue. The bridges are tested by the teacher to determine their strength.
*The entire unit is composed of four smaller lessons that can be utilized individually or together as part of a larger project.
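The stress/strength relationship described above can be sketched in a few lines of code. This is an illustrative sketch only; the stick strength and load values below are invented for the example, not taken from the unit:

```python
# Failure occurs when the applied stress exceeds the material's strength.
# Factor of safety = strength / applied stress (values > 1 mean the member holds).

def factor_of_safety(strength, stress):
    """Return the factor of safety for a given strength and applied stress."""
    return strength / stress

def fails(strength, stress):
    """A member fails when the applied stress is larger than its strength."""
    return stress > strength

# Hypothetical popsicle-stick test: strength of 30 MPa, applied stress of 12 MPa.
fos = factor_of_safety(30.0, 12.0)
print(fos)                 # 2.5
print(fails(30.0, 12.0))   # False: stress is below strength
```

A factor of safety of 2.5 means the stick could carry two and a half times the applied load before reaching its strength.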
{"url":"http://www.ck12.org/book/CK-12-Modeling-and-Simulation-for-High-School-Teachers%253A-Principles%252C-Problems%252C-and-Lesson-Plans/r8/section/9.1/","timestamp":"2014-04-17T12:39:16Z","content_type":null,"content_length":"100292","record_id":"<urn:uuid:9e89801b-fe82-4edf-8bd1-3596856bbfe2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
the mean, median and the mode
New to the CF scene
Join Date
Apr 2004
Thanked 0 Times in 0 Posts
the mean, median and the mode
hi, does anybody know a pseudocode to calculate the following:
1) mean
2) median
3) mode
for any 10 random numbers
thanks for any help anyone is able to provide
Senior Coder
Join Date
Aug 2002
Kansas City, Kansas
Thanked 2 Times in 2 Posts
Put the numbers into an array and sort it.
Mean: add all of the numbers together and divide by 10.
Median: (array[4] + array[5]) / 2, i.e. the average of the two middle values of the sorted array (0-based indexing).
I'm not sure of a way to calculate the mode, though this might help you out:
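In Python terms, the mean and median suggestions above look like this (a sketch; the numbers are taken from an example later in the thread):

```python
# Mean and median of a list of 10 numbers, following the steps above.
nums = [9, 5, 6, 23, 25, 22, 12, 12, 2, 2]

mean = sum(nums) / len(nums)

sorted_nums = sorted(nums)        # the median formula assumes a sorted array
n = len(sorted_nums)
if n % 2 == 0:                    # even count: average the two middle values
    median = (sorted_nums[n // 2 - 1] + sorted_nums[n // 2]) / 2
else:                             # odd count: take the middle value
    median = sorted_nums[n // 2]

print(mean, median)
```

For these ten values the mean is 11.8 and the median is 10.5.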
Senior Coder
Join Date
Jun 2002
Sydney, Australia
Thanked 1 Time in 1 Post
Well, thinking back to my primary mathematics days, mode is the most frequently occurring score (at least I hope it is)...
Now there are a few ways you could go about it. Depending on how many scores you have, the easiest way might be to copy the array of scores into a 2D array and use the second dimension to tally
the number of occurrences of each score, then select the one with the greatest tally.
However, the way I would do it would be to sort the scores first (ascending or descending, whichever you prefer) using whatever sorting method you like. I doubt you have more than, say, 20 scores, so
whichever sort you pick will work fine.
After that, counting is easy.
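That sort-then-count approach can be sketched in Python (illustrative only; the thread never posts actual code, and this version reports a single mode, breaking ties by the smallest value):

```python
# Mode via sort-then-count: after sorting, equal scores sit next to each other,
# so a single pass can tally run lengths and keep the longest run.
def mode(scores):
    s = sorted(scores)
    best_value, best_count = s[0], 0
    current_value, current_count = s[0], 0
    for x in s:
        if x == current_value:
            current_count += 1
        else:
            current_value, current_count = x, 1
        if current_count > best_count:
            best_value, best_count = current_value, current_count
    return best_value

print(mode([9, 5, 6, 23, 25, 22, 12, 12, 2, 2]))   # 2 (ties with 12; smallest wins)
```

As a later post in the thread points out, a distribution can have more than one mode, so a fuller version would return every value that reaches the highest count.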
I'm surprised antonio didn't cite you on this, but --
Your title could do with a few fewer exclamation marks... it's hardly going to make us want to answer your post any faster, if at all.
Edit: Removed the exclamation marks from it.
/// liorean
Last edited by liorean; 05-03-2004 at 10:42 AM.
Omnis mico antequam dominus Spookster!
Master Coder
Join Date
Jul 2002
Thanked 0 Times in 0 Posts
Not all distributions are uni-modal. If these random numbers are completely random without duplicates, then each number will occur once and there is no mode.
If you have 2 numbers each appearing twice or so, then you'll have a bi-modal distribution (2 modes).
So there's no straightforward answer to your question, since we don't know anything about the input values.
Master Coder
Join Date
Jul 2002
Thanked 0 Times in 0 Posts
So they first need to be split up into groups of 10 elements? (according to your original question) So then you get
9,5,6,23,25,22,12,12,2,2 as the first group, which means you'll have 2 modes: 2 and 12
so your steps are:
- sort the elements
- count each value's occurrences
- sort the frequencies descending
- report the value with the highest frequency
- check if the next value has the same frequency and if so, also report it. | {"url":"http://www.codingforums.com/computer-programming/37921-mean-median-mode.html?pda=1","timestamp":"2014-04-18T10:43:48Z","content_type":null,"content_length":"77992","record_id":"<urn:uuid:5e375ebc-ecf9-430a-8543-a2898aace759>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Proposal of new function: iteraxis()
Robert Kern robert.kern@gmail....
Thu Apr 25 14:10:32 CDT 2013
On Thu, Apr 25, 2013 at 6:54 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
> Hi,
> On Thu, Apr 25, 2013 at 10:42 AM, Robert Kern <robert.kern@gmail.com> wrote:
>> On Thu, Apr 25, 2013 at 6:30 PM, Matthew Brett <matthew.brett@gmail.com> wrote:
>>> So the decision has to be based on some estimate of:
>>> 1) Cost for adding a new function to the namespace
>>> 2) Benefit : some combination of: Likelihood of needing to iterate
>>> over arbitrary axis. Likelihood of not finding rollaxis / transpose as
>>> a solution to this. Increased likelihood of finding iteraxis in this
>>> situation.
>> 3) Comparison with other solutions that might obtain the same benefits
>> without the attendant costs: i.e. additional documentation in any
>> number of forms.
> Right, good point. That would also need to be weighted with the
> likelihood that people will find and read that documentation.
In my opinion, duplicating functionality under different aliases just
so people can supposedly find things without reading the documentation
is not a viable strategy for building out an API.
My suggestion is to start building out a "How do I ...?" section to
the User's Guide that answers small questions like this. "How do I
iterate over an arbitrary axis of an array?" should be sufficiently
discoverable. This is precisely the kind of problem that documentation
solves better than anything else. This is what we write documentation
for. Let's make use of it before trying something else. If we add such
a section, and still see many people not finding it, then we can
consider adding aliases.
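For reference, the rollaxis/transpose idiom alluded to earlier in the thread can be sketched as follows (illustrative only; this is not an API being proposed in the thread):

```python
import numpy as np

# Iterating over an arbitrary axis of an array: roll that axis to the front,
# then iterate normally. np.rollaxis(a, axis) returns a view with `axis`
# moved to position 0.
a = np.arange(24).reshape(2, 3, 4)

for plane in np.rollaxis(a, 2):   # iterate over axis 2: yields 4 arrays of shape (2, 3)
    print(plane.shape)
```

The same effect can be had with `a.transpose(...)` by listing the desired axis first.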
Robert Kern
More information about the NumPy-Discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2013-April/066377.html","timestamp":"2014-04-17T16:24:28Z","content_type":null,"content_length":"4807","record_id":"<urn:uuid:3d5d950e-e12d-4157-afec-49cde1b9ecb1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Department, Princeton University
High Energy Theory Seminar - Josephine Suh, MIT - "Entanglement growth during thermalization via gauge/gravity duality"
I will present results on scaling regimes in entanglement growth during thermalization in holographic field theories. These scaling regimes are derived from the time evolution of entanglement entropy
of spatial regions in a class of idealized thermalizing states in which a homogeneous, isotropic density of energy is instantaneously injected into the vacuum of a CFT. The scaling formulae which
apply before saturation depend only on macroscopic properties of the final equilibrium state as well as the injected energy density, and are expected to apply to a broad class of thermalizing states
in strongly coupled field theories.
The time-dependent entanglement entropy is calculated using extremal surfaces in the gravity dual, and interestingly we find that "critical extremal surfaces" inside the event horizon of the black
hole, dual to the finite equilibrium state, result in regimes of linear growth and "memory loss" for large regions at long time scales. We propose a picture of entanglement propagation in which a
wave or "entanglement tsunami" carries entanglement inward from the boundary of regions, and which captures universal linear growth as well as characteristics of saturation. We also conjecture some
bounds on the rate of entanglement growth in relativistic systems.
Location: PCTS Seminar Room
Date/Time: 01/20/14 at 2:30 pm - 01/20/14 at 3:30 pm
Category: High Energy Theory Seminar
Department: Physics | {"url":"http://www.princeton.edu/physics/events_archive/viewevent.xml?id=724","timestamp":"2014-04-18T03:06:54Z","content_type":null,"content_length":"10986","record_id":"<urn:uuid:53405dfc-43d9-40b4-a580-ad23675d8740>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00542-ip-10-147-4-33.ec2.internal.warc.gz"} |
INTEGERS: The Electronic Journal of Combinatorial Number Theory, Volume 5(1) (2005)
Volume 5(1) (2005)
• A1: Restricted Permutations, Fibonacci Numbers, and k-generalized Fibonacci Numbers
□ Eric S. Egge and Toufik Mansour
• A2: On the Number of Ways of Writing t as a Product of Factorials
• A3: Two q-identities from the Theory of Fountains and Histograms Proved with a Tri-Diagonal Determinant
• A4: On the Distribution of Distances between the Points of Affine Curves over Finite Fields
• A5: The Akiyama-Tanigawa Transformation
□ Donatella Merlini, Renzo Sprugnoli, and M. Cecilia Verri
• A6: On Symmetric and Antisymmetric Balanced Binary Sequences
□ Shalom Eliahou and Delphine Hachez
• A7: Parity Theorems for Statistics on Permutations and Catalan Words
• A8: A Dual Approach to Triangle Sequences: A Multidimensional Continued Fraction Algorithm
□ Sami Assaf, Li-Chung Chen, Tegan Cheslack-Postava, Benjamin Cooper, Alexander Diesl, Thomas Garrity, Mathew Lepinski, and Adam Schuyler
• A9: Corrigendum to Article A1, Volume 2 (2002) (On the Problem of Uniqueness for the Maximum Stirling Number(s) of the Second Kind)
□ E. Rodney Canfield and Carl Pomerance
• A10: A Negative Answer to Two Questions about the Smallest Prime Numbers Having Given Digital Sums
• A11: Counting Rises, Levels, and Drops in Compositions
□ Silvia Heubach and Toufik Mansour
• A12: On Consecutive Integer Pairs With the Same Sum of Distinct Prime Divisors
□ Douglas E. Iannucci and Alexia S. Mintos
• A13: Powers of a Matrix and Combinatorial Identities (see also this article's Addendum)
□ J. Mc Laughlin and B. Sury
• A14: Complete Characterization of Substitution Invariant Sturmian Sequences
□ Peter Baláži, Zuzana Masáková, and Edita Pelantová
• A15: Generalization of a Binomial Identity of Simons
• A16: Relations Among Fourier Coefficients of Certain Eta Products
□ Shaun Cooper, Sanoli Gun, Michael Hirschhorn, and B. Ramakrishnan
• A17: An Application of Graph Pebbling to Zero-Sum Sequences in Abelian Groups
□ Shawn Elledge and Glenn H. Hurlbert
• A18: Two Very Short Proofs of a Combinatorial Identity
□ Roberto Anglani and Margherita Barile
• A19: Decimal Expansion of 1/p and Subgroup Sums
• A20: An Infinite Family of Overpartition Congruences Modulo 12
□ Michael D. Hirschhorn and James A. Sellers
• A21: On 2-Adic Orders of Stirling Numbers of the Second Kind
• A22: Some Results for Sums of the Inverses of Binomial Coefficients
□ Feng-Zhen Zhao and Tianming Wang
• A23: Finding Almost Squares II
• A24: A Note on Partitions and Compositions Defined by Inequalities
□ Sylvie Corteel, Carla D. Savage, and Herbert S. Wilf
• A25: A Question of Sierpinski on Triangular Numbers
• A26: Deriving Divisibility Theorems with Burnside's Theorem
□ Tyler J. Evans and Benjamin V. Holt
• A27: A Characterization of Minimal Zero-Sequences of Index One in Finite Cyclic Groups
□ Scott T. Chapman and William W. Smith
• A28: Periodic Multiplicative Algorithms of Selmer Type
• A29: Enumeration of Generalized Hook Partitions
□ Luca Ferrari and Simone Rinaldi
• A30: Pythagorean Primes and Palindromic Continued Fractions
□ Arthur T. Benjamin and Doron Zeilberger
• A31: An Infinite Family of Dual Sequence Identities
• A32: On the Degree of Regularity of Generalized van der Waerden Triples
□ Jacob Fox and Radoš Radoičić
• A33: A Common Generalization of Some Identities
□ Zhizheng Zhang and Jun Wang
Games Articles
• G1: Sumbers -- Sums of Ups and Downs
• G2: Partial Nim
• G3: Variations on a Theme of Euclid
• G4: New Temperatures in Domineering
□ Ajeet Shankar and Manu Sridharan
• G5: Taming the Wild in Impartial Combinatorial Games
• G6: Positions of Value *2 in Generalized Domineering and Chess
• G7: On Three-rowed Chomp
□ Andries E. Brouwer, Gábor Horváth, Ildikó Molnár-Sáska, and Csaba Szabó | {"url":"http://www.integers-ejcnt.org/vol5.html","timestamp":"2014-04-19T22:05:18Z","content_type":null,"content_length":"17845","record_id":"<urn:uuid:93a93b9c-309f-49d6-9d2e-b907fb906c4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
OWL Web Ontology Language
Semantics and Abstract Syntax
W3C Proposed Recommendation 15 December 2003
The normative form of this document is a compound HTML document.
This description of OWL, the Web Ontology Language being designed by the W3C Web Ontology Working Group, contains a high-level abstract syntax for both OWL DL and OWL Lite, sublanguages of OWL. A
model-theoretic semantics is given to provide a formal meaning for OWL ontologies written in this abstract syntax. A model-theoretic semantics in the form of an extension to the RDF semantics is also
given to provide a formal meaning for OWL ontologies as RDF graphs (OWL Full). A mapping from the abstract syntax to RDF graphs is given and the two model theories are shown to have the same
consequences on OWL ontologies that can be written in the abstract syntax.
Status of this document
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this
technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
Publication as a Proposed Recommendation does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is
inappropriate to cite this document as other than work in progress.
This draft is one of six parts of the Proposed Recommendation (PR) for OWL, the Web Ontology Language. It has been developed by the Web Ontology Working Group as part of the W3C Semantic Web Activity
(Activity Statement, Group Charter) for publication on 15 December 2003.
The design of OWL expressed in earlier versions of these documents has been widely reviewed and satisfies the Working Group's technical requirements. The Working Group has addressed all comments
received, making changes as necessary. During Candidate Recommendation, many implementations were reported, covering among them all features of the language. Changes to this document since the
Candidate Recommendation version are detailed in the change log.
W3C Advisory Committee Representatives are invited to submit their formal review per the instructions in the Call for Review. The public is invited to send comments to public-webont-comments@w3.org (
archive) and to participate in general discussion of related technology at www-rdf-logic@w3.org (archive). The review period extends until 12 January 2004.
The W3C maintains a list of any patent disclosures related to this work.
1. Introduction (Informative)
This document is one part of the specification of OWL, the Web Ontology Language. The OWL Overview [OWL Overview] describes each of the different documents in the specification and how they fit
This document contains several interrelated normative specifications of the several styles of OWL, the Web Ontology Language being produced by the W3C Web Ontology Working Group (WebOnt). First,
Section 2 contains a high-level, abstract syntax for both OWL Lite, a subset of OWL, and OWL DL, a fuller style of using OWL but one that still places some limitations on how OWL ontologies are
constructed. Eliminating these limitations results in the full OWL language, called OWL Full, which has the same syntax as RDF. The normative exchange syntax for OWL is RDF/XML [RDF Syntax]; the OWL
Reference document [OWL Reference] shows how the RDF syntax is used in OWL. A mapping from the OWL abstract syntax to RDF graphs [RDF Concepts] is, however, provided in Section 4.
This document contains two formal semantics for OWL. One of these semantics, defined in Section 3, is a direct, standard model-theoretic semantics for OWL ontologies written in the abstract syntax.
The other, defined in Section 5, is a vocabulary extension of the RDF semantics [RDF MT] that provides semantics for OWL ontologies in the form of RDF graphs. Two versions of this second semantics
are provided, one that corresponds more closely to the direct semantics (and is thus a semantics for OWL DL) and one that can be used in cases where classes need to be treated as individuals or other
situations that cannot be handled in the abstract syntax (and is thus a semantics for OWL Full). These two versions are actually very close, only differing in how they divide up the domain of
Appendix A contains a proof that the direct and RDFS-compatible semantics have the same consequences on OWL ontologies that correspond to abstract OWL ontologies that separate OWL individuals, OWL
classes, OWL properties, and the RDF, RDFS, and OWL structural vocabulary. Appendix A also contains the sketch of a proof that the entailments in the RDFS-compatible semantics for OWL Full include
all the entailments in the RDFS-compatible semantics for OWL DL. Finally a few examples of the various concepts defined in the document are presented in Appendix B.
This document is designed to be read by those interested in the technical details of OWL. It is not particularly intended for the casual reader, who should probably first read the OWL Guide [OWL
Guide]. Developers of parsers and other syntactic tools for OWL will be particularly interested in Sections 2 and 4. Developers of reasoners and other semantic tools for OWL will be particularly
interested in Sections 3 and 5.
2. Abstract Syntax (Normative)
The syntax for OWL in this section abstracts from exchange syntax for OWL and thus facilitates access to and evaluation of the language. This particular syntax has a frame-like style, where a
collection of information about a class or property is given in one large syntactic construct, instead of being divided into a number of atomic chunks (as in most Description Logics) or even being
divided into even more triples (as when writing OWL as RDF graphs [RDF Concepts]). The syntax used here is rather informal, even for an abstract syntax - in general the arguments of a construct
should be considered to be unordered wherever the order would not affect the meaning of the construct.
The abstract syntax is specified here by means of a version of Extended BNF, very similar to the EBNF notation used for XML [XML]. Terminals are quoted; non-terminals are bold and not quoted.
Alternatives are either separated by vertical bars (|) or are given in different productions. Components that can occur at most once are enclosed in square brackets ([…]); components that can occur
any number of times (including zero) are enclosed in braces ({…}). Whitespace is ignored in the productions here.
Names in the abstract syntax are RDF URI references, [RDF Concepts]. Often these names will be abbreviated into qualified names, using one of the following namespace names:
│Namespace name│ Namespace │
│rdf │http://www.w3.org/1999/02/22-rdf-syntax-ns# │
│rdfs │http://www.w3.org/2000/01/rdf-schema# │
│xsd │http://www.w3.org/2001/XMLSchema# │
│owl │http://www.w3.org/2002/07/owl# │
The meaning of each construct in the abstract syntax is informally described when it is introduced. The formal meaning of these constructs is given in Section 3 via a model-theoretic semantics.
While it is widely appreciated that all of the features in expressive languages such as OWL are important to some users, it is also understood that such languages may be daunting to some groups who
are trying to support a tool suite for the entire language. In order to provide a simpler target for implementation, a smaller language has been defined, called OWL Lite [OWL Overview]. This smaller
language was designed to provide functionality that is important in order to support Web applications, but that is missing in RDF Schema [RDF Schema]. (Note, however, that both OWL DL and OWL Lite do
not provide all of the features of RDF Schema.) The abstract syntax is expressed both for this smaller language, called the OWL Lite abstract syntax here, and also for a fuller style of OWL, called
the OWL DL abstract syntax here.
The abstract syntax here is less general than the exchange syntax for OWL. In particular, it does not permit the construction of self-referential syntactic constructs. It is also intended for use in
cases where classes, properties, and individuals form disjoint collections. These are roughly the limitations required to make reasoning in OWL decidable, and thus this abstract syntax should be
thought of as a syntax for OWL DL.
NOTE: OWL Lite and OWL DL closely correspond to the description logics known as SHIF(D) and SHOIN(D), with some limitation on how datatypes are treated. The abstract syntax for OWL Lite doesn't
contain many of the common explicit constructors associated with SHIF(D), but the expressivity remains.
2.1. Ontologies
An OWL ontology in the abstract syntax contains a sequence of annotations, axioms, and facts. OWL ontologies can have a name. Annotations on OWL ontologies can be used to record authorship and other
information associated with an ontology, including imports references to other ontologies. The main content of an OWL ontology is carried in its axioms and facts, which provide information about
classes, properties, and individuals in the ontology.
ontology ::= 'Ontology(' [ ontologyID ] { directive } ')'
directive ::= 'Annotation(' ontologyPropertyID ontologyID ')'
| 'Annotation(' annotationPropertyID URIreference ')'
| 'Annotation(' annotationPropertyID dataLiteral ')'
| 'Annotation(' annotationPropertyID individual ')'
| axiom
| fact
Names of ontologies are used in the abstract syntax to carry the meaning associated with publishing an ontology on the Web. It is thus intended that the name of an ontology in the abstract syntax
would be the URI where it could be found, although this is not part of the formal meaning of OWL. Imports annotations, in effect, are directives to retrieve a Web document and treat it as an OWL
ontology. However, most aspects of the Web, including missing, unavailable, and time-varying documents, reside outside the OWL specification; all that is carried here is that a URI can be
"dereferenced" into an OWL ontology. In several places in this document, therefore, idealizations of this operational meaning for imports are used.
Ontologies incorporate information about classes, properties, and individuals, each of which can have an identifier which is a URI reference. Some of these identifiers need to be given axioms, as
detailed in Section 2.3.
datatypeID ::= URIreference
classID ::= URIreference
individualID ::= URIreference
ontologyID ::= URIreference
datavaluedPropertyID ::= URIreference
individualvaluedPropertyID ::= URIreference
annotationPropertyID ::= URIreference
ontologyPropertyID ::= URIreference
A URI reference cannot be both a datatypeID and a classID in an ontology. A URI reference also cannot be more than one of a datavaluedPropertyID, an individualvaluedPropertyID, an
annotationPropertyID, or an ontologyPropertyID in an ontology. However, a URI reference can be the identifier of a class or datatype as well as the identifier of a property as well as the identifier
of an individual, although the ontology cannot then be translated into an OWL DL RDF graph.
In OWL a datatype denotes the set of data values that is the value space for the datatype. Classes denote sets of individuals. Properties relate individuals to other information, and are divided into
four disjoint groups, data-valued properties, individual-valued properties, annotation properties, and ontology properties. Data-valued properties relate individuals to data values. Individual-valued
properties relate individuals to other individuals. Annotation properties are used to place annotations on individuals, class names, property names, and ontology names. Ontology properties relate
ontologies to other ontologies, in particular being used for importing information from other ontologies. Individual identifiers are used to refer to resources, and data literals are used to refer to
data values.
There are two built-in classes in OWL, they both use URI references in the OWL namespace, i.e., names starting with http://www.w3.org/2002/07/owl#, for which the namespace name owl is used here.
(Throughout this document qualified names will be used as abbreviations for URI references.) The class with identifier owl:Thing is the class of all individuals. The class with identifier owl:Nothing
is the empty class. Both classes are part of OWL Lite.
The following XML Schema datatypes [XML Schema Datatypes] can be used in OWL as built-in datatypes by means of the XML Schema canonical URI reference for the datatype, http://www.w3.org/2001/
XMLSchema#name, where name is the local name of the datatype: xsd:string, xsd:boolean, xsd:decimal, xsd:float, xsd:double, xsd:dateTime, xsd:time, xsd:date, xsd:gYearMonth, xsd:gYear, xsd:gMonthDay,
xsd:gDay, xsd:gMonth, xsd:hexBinary, xsd:base64Binary, xsd:anyURI, xsd:normalizedString, xsd:token, xsd:language, xsd:NMTOKEN, xsd:Name, xsd:NCName, xsd:integer, xsd:nonPositiveInteger,
xsd:negativeInteger, xsd:long, xsd:int, xsd:short, xsd:byte, xsd:nonNegativeInteger, xsd:unsignedLong, xsd:unsignedInt, xsd:unsignedShort, xsd:unsignedByte and xsd:positiveInteger. The other built-in
XML Schema datatypes are problematic for OWL, as discussed in Section 5.1 of RDF Semantics [RDF MT]. The built-in RDF datatype, rdf:XMLLiteral, is also an OWL built-in datatype. Because there is no
standard way to go from a URI reference to an XML Schema datatype in an XML Schema, there is no standard way to use user-defined XML Schema datatypes in OWL.
There are several built-in annotation properties in OWL, namely owl:versionInfo, rdfs:label, rdfs:comment, rdfs:seeAlso, and rdfs:isDefinedBy. In keeping with their definition in RDF, rdfs:label and
rdfs:comment can only be used with data literals.
There are also several built-in ontology properties; they are owl:imports, owl:priorVersion, owl:backwardCompatibleWith, and owl:incompatibleWith. Ontology annotations that use owl:imports have the
extra effect of importing the target ontology.
Many OWL constructs use annotations, which, just like annotation directives, are used to record information associated with some portion of the construct.
annotation ::= 'annotation(' annotationPropertyID URIreference ')'
| 'annotation(' annotationPropertyID dataLiteral ')'
| 'annotation(' annotationPropertyID individual ')'
2.2. Facts
There are two kinds of facts in the OWL abstract syntax.
The first kind of fact states information about a particular individual, in the form of classes that the individual belongs to plus properties and values of that individual. An individual can be
given an individualID that will denote that individual, and can be used to refer to that individual. However, an individual need not be given an individualID; such individuals are anonymous (blank in
RDF terms) and cannot be directly referred to elsewhere. The syntax here is set up to somewhat mirror RDF/XML syntax [RDF Syntax] without the use of rdf:nodeID.
fact ::= individual
individual ::= 'Individual(' [ individualID ] { annotation } { 'type(' type ')' } { value } ')'
value ::= 'value(' individualvaluedPropertyID individualID ')'
| 'value(' individualvaluedPropertyID individual ')'
| 'value(' datavaluedPropertyID dataLiteral ')'
Facts are the same in the OWL Lite and OWL DL abstract syntaxes, except for what can be a type. In OWL Lite, types can be class IDs or OWL Lite restrictions, see Section 2.3.1.2
type ::= classID
| restriction
In the OWL DL abstract syntax types can be general descriptions, which include class IDs and OWL Lite restrictions as well as other constructs
type ::= description
Data literals in the abstract syntax are either plain literals or typed literals. Plain literals consist of a Unicode string in Normal Form C and an optional language tag, as in RDF plain literals [
RDF Concepts]. Typed literals consist of a lexical representation and a URI reference, as in RDF typed literals [RDF Concepts].
dataLiteral ::= typedLiteral | plainLiteral
typedLiteral ::= lexicalForm^^URIreference
plainLiteral ::= lexicalForm | lexicalForm@languageTag
lexicalForm ::= as in RDF, a unicode string in normal form C
languageTag ::= as in RDF, an XML language tag
The second kind of fact is used to make individual identifiers be the same or pairwise distinct.
fact ::= 'SameIndividual(' individualID individualID {individualID} ')'
| 'DifferentIndividuals(' individualID individualID {individualID} ')'
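As an illustration, the two kinds of facts might look like this in the abstract syntax (the ex: names are invented for the example):

```
Individual( ex:JohnSmith
  type( ex:Person )
  value( ex:hasAge "21"^^xsd:integer )
  value( ex:hasFriend Individual( type( ex:Person ) ) )
)
SameIndividual( ex:JohnSmith ex:JSmith )
DifferentIndividuals( ex:JohnSmith ex:MaryJones )
```

The nested Individual construct has no individualID and so denotes an anonymous (blank) individual that cannot be referred to elsewhere.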
2.3. Axioms
The biggest differences between the OWL Lite and OWL DL abstract syntaxes show up in the axioms, which are used to provide information about classes and properties. As it is the smaller language, OWL
Lite axioms are given first, in Section 2.3.1. The OWL DL axioms are given in Section 2.3.2. OWL DL axioms include OWL Lite axioms as special cases.
Axioms are used to associate class and property identifiers with either partial or complete specifications of their characteristics, and to give other information about classes and properties. Axioms
used to be called definitions, but they are not all definitions in the common sense of the term and thus a more neutral name has been chosen.
The syntax used here is meant to look somewhat like the syntax used in some frame systems. Each class axiom in OWL Lite contains a collection of more-general classes and a collection of local
property restrictions in the form of restriction constructs. The restriction construct gives the local range of a property, how many values are permitted, and/or a collection of required values. The
class is made either equivalent to or a subset of the intersection of these more-general classes and restrictions. In the OWL DL abstract syntax a class axiom contains a collection of descriptions,
which can be more-general classes, restrictions, sets of individuals, and boolean combinations of descriptions. Classes can also be specified by enumeration or be made equivalent or disjoint.
Properties can be equivalent to or sub-properties of others; can be made functional, inverse functional, symmetric, or transitive; and can be given global domains and ranges. However, most
information concerning properties is more naturally expressed in restrictions, which allow local range and cardinality information to be specified.
URI references used as class IDs or datatype IDs have to be differentiated, and so need an axiom, except for the built-in OWL classes and datatypes and rdfs:Literal. There can be more than one axiom
for a class or datatype. Properties used in an abstract syntax ontology have to be categorized as either data-valued or individual-valued or annotation properties. Properties thus also need an axiom
for this purpose, at least. If an ontology imports another ontology, the axioms in the imported ontology (and any ontologies it imports, and so on) can be used for these purposes.
2.3.1. OWL Lite Axioms
2.3.1.1. OWL Lite Class Axioms
In OWL Lite class axioms are used to state that a class is exactly equivalent to, for the modality complete, or a subclass of, for the modality partial, the conjunction of a collection of
superclasses and OWL Lite Restrictions. It is also possible to indicate that the use of a class is deprecated.
axiom ::= 'Class(' classID ['Deprecated'] modality { annotation } { super } ')'
modality ::= 'complete' | 'partial'
super ::= classID | restriction
In OWL Lite it is possible to state that two or more classes are equivalent.
axiom ::= 'EquivalentClasses(' classID classID { classID } ')'
Datatype axioms are simpler, only serving to say that a datatype ID is the ID of a datatype and to give annotations for the datatype.
axiom ::= 'Datatype(' datatypeID ['Deprecated'] { annotation } ')'
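As an illustration of these productions (using hypothetical identifiers such as ex:Student, which do not appear in this document), a partial class axiom states necessary conditions for membership, while a complete class axiom states necessary and sufficient conditions:

  Class(ex:Student partial ex:Person restriction(ex:enrolledIn someValuesFrom(ex:Course)))
  Class(ex:Bachelor complete ex:Man ex:Unmarried)
  EquivalentClasses(ex:Human ex:Person)

The first axiom says that every ex:Student is an ex:Person enrolled in at least one ex:Course; the second makes ex:Bachelor exactly the intersection of ex:Man and ex:Unmarried.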
2.3.1.2. OWL Lite Restrictions
Restrictions are used in OWL Lite class axioms to provide local constraints on properties in the class. Each allValuesFrom part of a restriction makes the constraint that all values of the property
for individuals in the class must belong to the specified class or datatype. Each someValuesFrom part makes the constraint that there must be at least one value for the property that belongs to the
specified class or datatype. The cardinality part says how many distinct values there are for the property for each individual in the class. In OWL Lite the only cardinalities allowed are 0 and 1.
See Section 2.3.1.3 for a limitation on which properties can have cardinality parts in restrictions.
restriction ::= 'restriction(' datavaluedPropertyID dataRestrictionComponent ')'
| 'restriction(' individualvaluedPropertyID individualRestrictionComponent ')'
dataRestrictionComponent ::= 'allValuesFrom(' dataRange ')'
| 'someValuesFrom(' dataRange ')'
| cardinality
individualRestrictionComponent ::= 'allValuesFrom(' classID ')'
| 'someValuesFrom(' classID ')'
| cardinality
cardinality ::= 'minCardinality(0)' | 'minCardinality(1)'
| 'maxCardinality(0)' | 'maxCardinality(1)'
| 'cardinality(0)' | 'cardinality(1)'
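For example (again with hypothetical identifiers), the following restrictions could appear as supers in an OWL Lite class axiom:

  restriction(ex:hasParent allValuesFrom(ex:Person))
  restriction(ex:hasSpouse maxCardinality(1))
  restriction(ex:hasAge cardinality(1))

The first constrains all values of ex:hasParent to belong to ex:Person, the second permits at most one value for ex:hasSpouse, and the third requires exactly one value for the data-valued property ex:hasAge.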
2.3.1.3. OWL Lite Property Axioms
Properties are also specified using a frame-like syntax. Data-valued properties relate individuals to data values, like integers. Individual-valued properties relate individuals to other individuals.
These two kinds of properties can be given super-properties, allowing the construction of a property hierarchy. It does not make sense to have an individual-valued property be a super-property of a
data-valued property, or vice versa. Data-valued and individual-valued properties can also be given domains and ranges. A domain for a property specifies which individuals are potential subjects of
statements that have the property as predicate, just as in RDFS. In OWL Lite the domains of properties are classes. There can be multiple domains, in which case only individuals that belong to all of
the domains are potential subjects. A range for a property specifies which individuals or data values can be objects of statements that have the property as predicate. Again, there can be multiple
ranges, in which case only individuals or data values that belong to all of the ranges are potential objects. In OWL Lite ranges for individual-valued properties are classes; ranges for data-valued
properties are datatypes.
Data-valued properties can be specified as (partial) functional, i.e., given an individual, there can be at most one relationship to a data value for that individual in the property.
Individual-valued properties can be specified to be the inverse of another property. Individual-valued properties can also be specified to be symmetric as well as partial functional, partial
inverse-functional, or transitive.
To preserve decidability of reasoning in OWL Lite, not all properties can have cardinality restrictions placed on them or be specified as functional or inverse-functional. An individual-valued
property is complex if 1/ it is specified as being functional or inverse-functional, 2/ there is some cardinality restriction that uses it, 3/ it has an inverse that is complex, or 4/ it has a
super-property that is complex. Complex properties cannot be specified as being transitive.
Annotation and ontology properties are much simpler than data-valued and individual-valued properties. The only information in axioms for them is annotations.
axiom ::= 'DatatypeProperty(' datavaluedPropertyID ['Deprecated'] { annotation }
{ 'super(' datavaluedPropertyID ')' } ['Functional']
{ 'domain(' classID' ')' } { 'range(' dataRange ')' } ')'
| 'ObjectProperty(' individualvaluedPropertyID ['Deprecated'] { annotation }
{ 'super(' individualvaluedPropertyID ')' }
[ 'inverseOf(' individualvaluedPropertyID ')' ] [ 'Symmetric' ]
[ 'Functional' | 'InverseFunctional' | 'Functional' 'InverseFunctional' | 'Transitive' ]
{ 'domain(' classID ')' } { 'range(' classID ')' } ')'
| 'AnnotationProperty(' annotationPropertyID { annotation } ')'
| 'OntologyProperty(' ontologyPropertyID { annotation } ')'
dataRange ::= datatypeID | 'rdfs:Literal'
The following axioms make several properties be equivalent, or make one property be a sub-property of another.
axiom ::= 'EquivalentProperties(' datavaluedPropertyID datavaluedPropertyID { datavaluedPropertyID } ')'
| 'SubPropertyOf(' datavaluedPropertyID datavaluedPropertyID ')'
| 'EquivalentProperties(' individualvaluedPropertyID individualvaluedPropertyID { individualvaluedPropertyID } ')'
| 'SubPropertyOf(' individualvaluedPropertyID individualvaluedPropertyID ')'
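To illustrate (hypothetical identifiers), the following property axioms are well-formed under the grammar above:

  DatatypeProperty(ex:hasAge Functional domain(ex:Person) range(xsd:integer))
  ObjectProperty(ex:hasChild inverseOf(ex:hasParent) domain(ex:Person) range(ex:Person))
  SubPropertyOf(ex:hasSon ex:hasChild)

The first makes ex:hasAge a functional data-valued property from persons to integers; the second makes ex:hasChild and ex:hasParent inverses of each other.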
2.3.2. OWL DL Axioms
2.3.2.1. OWL DL Class Axioms
The OWL DL abstract syntax has more-general versions of the OWL Lite class axioms where superclasses, more-general restrictions, and boolean combinations of these are allowed. Together, these
constructs are called descriptions.
axiom ::= 'Class(' classID ['Deprecated'] modality { annotation } { description } ')'
modality ::= 'complete' | 'partial'
In the OWL DL abstract syntax it is also possible to make a class exactly consist of a certain set of individuals, as follows.
axiom ::= 'EnumeratedClass(' classID ['Deprecated'] { annotation } { individualID } ')'
Finally, in the OWL DL abstract syntax it is possible to require that a collection of descriptions be pairwise disjoint, or have the same instances, or that one description is a subclass of another.
Note that the last two of these axioms generalize, except for lack of annotation, the first kind of class axiom just above.
axiom ::= 'DisjointClasses(' description description { description } ')'
| 'EquivalentClasses(' description { description } ')'
| 'SubClassOf(' description description ')'
In OWL DL it is possible to have only one description in an EquivalentClasses construct. This allows ontologies to include descriptions that are not connected to anything, which is not semantically
useful, but makes allowances for less-than-optimal editing of ontologies.
Datatype axioms are the same as in OWL Lite.
axiom ::= 'Datatype(' datatypeID ['Deprecated'] { annotation } ')'
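As an illustration of the OWL DL class axioms (hypothetical identifiers):

  EnumeratedClass(ex:Weekday ex:Monday ex:Tuesday ex:Wednesday ex:Thursday ex:Friday)
  DisjointClasses(ex:Plant ex:Animal)
  SubClassOf(ex:Dog intersectionOf(ex:Animal ex:Pet))

The first makes ex:Weekday consist of exactly the five listed individuals; the last uses a boolean description, which OWL DL (but not OWL Lite) permits in class axioms.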
2.3.2.2. OWL DL Descriptions
Descriptions in the OWL DL abstract syntax include class identifiers and restrictions. Descriptions can also be boolean combinations of other descriptions, and sets of individuals.
description ::= classID
| restriction
| 'unionOf(' { description } ')'
| 'intersectionOf(' { description } ')'
| 'complementOf(' description ')'
| 'oneOf(' { individualID } ')'
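Descriptions such as the following (hypothetical identifiers) can be used wherever the grammar calls for a description, and can be nested arbitrarily:

  unionOf(ex:Man ex:Woman)
  intersectionOf(ex:Person complementOf(ex:Adult))
  oneOf(ex:Alice ex:Bob)

The second description, for instance, denotes the persons that are not adults.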
2.3.2.3. OWL DL Restrictions
Restrictions in the OWL DL abstract syntax generalize OWL Lite restrictions by allowing descriptions where classes are allowed in OWL Lite and allowing sets of data values as well as datatypes. The
combination of datatypes and sets of data values is called a data range. In the OWL DL abstract syntax, values can also be given for properties in classes. In addition, cardinalities are not
restricted to only 0 and 1.
restriction ::= 'restriction(' datavaluedPropertyID dataRestrictionComponent { dataRestrictionComponent } ')'
| 'restriction(' individualvaluedPropertyID individualRestrictionComponent { individualRestrictionComponent } ')'
dataRestrictionComponent ::= 'allValuesFrom(' dataRange ')'
| 'someValuesFrom(' dataRange ')'
| 'value(' dataLiteral ')'
| cardinality
individualRestrictionComponent ::= 'allValuesFrom(' description ')'
| 'someValuesFrom(' description ')'
| 'value(' individualID ')'
| cardinality
cardinality ::= 'minCardinality(' non-negative-integer ')'
| 'maxCardinality(' non-negative-integer ')'
| 'cardinality(' non-negative-integer ')'
A data range, used as the range of a data-valued property and in other places in the OWL DL abstract syntax, is either a datatype or a set of data values.
dataRange ::= datatypeID | 'rdfs:Literal'
| 'oneOf(' { dataLiteral } ')'
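For example (hypothetical identifiers), OWL DL permits restrictions with several components, arbitrary cardinalities, required values, and enumerated data ranges:

  restriction(ex:hasChild minCardinality(2) someValuesFrom(ex:Doctor))
  restriction(ex:shirtSize someValuesFrom(oneOf("small" "medium" "large")))
  restriction(ex:hasColor value("red"))

The first describes the individuals with at least two ex:hasChild values, at least one of which is an ex:Doctor.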
The OWL Lite limitations on which properties can have cardinality components in their restrictions are also present in OWL DL.
2.3.2.4. OWL DL Property Axioms
Property axioms in the OWL DL abstract syntax generalize OWL Lite property axioms by allowing descriptions in place of classes and data ranges in place of datatypes in domains and ranges.
axiom ::= 'DatatypeProperty(' datavaluedPropertyID ['Deprecated'] { annotation }
{ 'super(' datavaluedPropertyID ')'} ['Functional']
{ 'domain(' description ')' } { 'range(' dataRange ')' } ')'
| 'ObjectProperty(' individualvaluedPropertyID ['Deprecated'] { annotation }
{ 'super(' individualvaluedPropertyID ')' }
[ 'inverseOf(' individualvaluedPropertyID ')' ] [ 'Symmetric' ]
[ 'Functional' | 'InverseFunctional' | 'Functional' 'InverseFunctional' | 'Transitive' ]
{ 'domain(' description ')' } { 'range(' description ')' } ')'
| 'AnnotationProperty(' annotationPropertyID { annotation } ')'
| 'OntologyProperty(' ontologyPropertyID { annotation } ')'
The limitations on which properties can be specified to be functional or inverse-functional are also present in OWL DL.
As in OWL Lite, the following axioms make several properties be equivalent, or make one property be a sub-property of another.
axiom ::= 'EquivalentProperties(' datavaluedPropertyID datavaluedPropertyID { datavaluedPropertyID } ')'
| 'SubPropertyOf(' datavaluedPropertyID datavaluedPropertyID ')'
| 'EquivalentProperties(' individualvaluedPropertyID individualvaluedPropertyID
{ individualvaluedPropertyID } ')'
| 'SubPropertyOf(' individualvaluedPropertyID individualvaluedPropertyID ')'
3. Direct Model-Theoretic Semantics (Normative)
This model-theoretic semantics for OWL goes directly from ontologies in the OWL DL abstract syntax, which includes the OWL Lite abstract syntax, to a standard model theory. It is simpler than the
semantics in Section 5, which is a vocabulary extension of the RDFS semantics.
3.1. Vocabularies and Interpretations
The semantics here starts with the notion of a vocabulary. When considering an OWL ontology, the vocabulary must include all the URI references and literals in that ontology, as well as those in
any ontologies that it imports, but can include other URI references and literals as well.
In this section V[OP] will be the URI references for the built-in OWL ontology properties.
Definition: An OWL vocabulary V consists of a set of literals V[L] and seven sets of URI references, V[C], V[D], V[I], V[DP], V[IP], V[AP], and V[O]. In any vocabulary V[C] and V[D] are disjoint and
V[DP], V[IP], V[AP], and V[OP] are pairwise disjoint. V[C], the class names of a vocabulary, contains owl:Thing and owl:Nothing. V[D], the datatype names of a vocabulary, contains the URI references
for the built-in OWL datatypes and rdfs:Literal. V[AP], the annotation property names of a vocabulary, contains owl:versionInfo, rdfs:label, rdfs:comment, rdfs:seeAlso, and rdfs:isDefinedBy. V[IP],
the individual-valued property names of a vocabulary, V[DP], the data-valued property names of a vocabulary, and V[I], the individual names of a vocabulary, V[O], the ontology names of a vocabulary,
do not have any required members.
Definition: As in RDF, a datatype d is characterized by a lexical space, L(d), which is a set of Unicode strings; a value space, V(d); and a total mapping L2V(d) from the lexical space to the value space.
Definition: A datatype map D is a partial mapping from URI references to datatypes that maps xsd:string and xsd:integer to the appropriate XML Schema datatypes.
A datatype map may contain datatypes for the other built-in OWL datatypes. It may also contain other datatypes, but there is no provision in the OWL syntax for conveying what these datatypes are.
Definition: Let D be a datatype map. An Abstract OWL interpretation with respect to D with vocabulary V[L], V[C], V[D], V[I], V[DP], V[IP], V[AP], V[O] is a tuple of the form: I = <R, EC, ER, L, S,
LV> where (with P being the power set operator)
• R, the resources of I, is a non-empty set
• LV, the literal values of I, is a subset of R that contains the set of Unicode strings, the set of pairs of Unicode strings and language tags, and the value spaces for each datatype in D
• O, the objects of I, is R - LV
• EC : V[C] → P(O)
• EC : V[D] → P(LV)
• ER : V[DP] → P(O×LV)
• ER : V[IP] → P(O×O)
• ER : V[AP] ∪ { rdf:type } → P(R×R)
• ER : V[OP] → P(R×R)
• L : TL → LV, where TL is the set of typed literals in V[L]
• S : V[I] ∪ V[C] ∪ V[D] ∪ V[DP] ∪ V[IP] ∪ V[AP] ∪ V[O] ∪ { owl:Ontology, owl:DeprecatedClass, owl:DeprecatedProperty } → R
• S(V[I]) ⊆ O
• If D(d') = d then EC(d') = V(d)
• If D(d') = d then L("v"^^d') ∈ V(d)
• If D(d') = d and v ∈ L(d) then L("v"^^d') = L2V(d)(v)
• If D(d') = d and v ∉ L(d) then L("v"^^d') ∈ R - LV
EC provides meaning for URI references that are used as OWL classes and datatypes. ER provides meaning for URI references that are used as OWL properties. (The property rdf:type is added to the
annotation properties so as to provide a meaning for deprecation, see below.) L provides meaning for typed literals. S provides meaning for URI references that are used to denote OWL individuals,
and helps provide meaning for annotations. Note that there are no interpretations that can satisfy all the requirements placed on badly-formed literals, i.e., literals whose lexical form is invalid
for the datatype, such as "1.5"^^xsd:integer.
S is extended to plain literals in V[L] by (essentially) mapping them onto themselves, i.e., S("l") = l for l a plain literal without a language tag and S("l"@t) = <l,t> for l a plain literal with a
language tag. S is extended to typed literals by using L, S(l) = L(l) for l a typed literal.
3.2. Interpreting Embedded Constructs
EC is extended to the syntactic constructs of descriptions, data ranges, individuals, values, and annotations as in the EC Extension Table.
EC Extension Table
│ Abstract Syntax │ Interpretation (value of EC) │
│complementOf(c) │O - EC(c) │
│unionOf(c[1] … c[n]) │EC(c[1]) ∪ … ∪ EC(c[n]) │
│intersectionOf(c[1] … c[n]) │EC(c[1]) ∩ … ∩ EC(c[n]) │
│oneOf(i[1] … i[n]), for i[j] individual IDs │{S(i[1]), …, S(i[n])} │
│oneOf(v[1] … v[n]), for v[j] literals │{S(v[1]), …, S(v[n])} │
│restriction(p x[1] … x[n]), for n > 1 │EC(restriction(p x[1])) ∩…∩EC(restriction(p x[n])) │
│restriction(p allValuesFrom(r)) │{x ∈ O | <x,y> ∈ ER(p) implies y ∈ EC(r)} │
│restriction(p someValuesFrom(e)) │{x ∈ O | ∃ <x,y> ∈ ER(p) ∧ y ∈ EC(e)} │
│restriction(p value(i)), for i an individual ID │{x ∈ O | <x,S(i)> ∈ ER(p)} │
│restriction(p value(v)), for v a literal │{x ∈ O | <x,S(v)> ∈ ER(p)} │
│restriction(p minCardinality(n)) │{x ∈ O | card({y ∈ O∪LV : <x,y> ∈ ER(p)}) ≥ n} │
│restriction(p maxCardinality(n)) │{x ∈ O | card({y ∈ O∪LV : <x,y> ∈ ER(p)}) ≤ n} │
│restriction(p cardinality(n)) │{x ∈ O | card({y ∈ O∪LV : <x,y> ∈ ER(p)}) = n} │
│Individual(annotation(p[1] o[1]) … annotation(p[k] o[k]) │EC(annotation(p[1] o[1])) ∩ … EC(annotation(p[k] o[k])) ∩ │
│type(c[1]) … type(c[m]) pv[1] … pv[n]) │EC(c[1]) ∩ … ∩ EC(c[m]) ∩ EC(pv[1]) ∩…∩ EC(pv[n]) │
│Individual(i annotation(p[1] o[1]) … annotation(p[k] o[k])│{S(i)} ∩ EC(annotation(p[1] o[1])) ∩ … EC(annotation(p[k] o[k])) ∩ │
│type(c[1]) … type(c[m]) pv[1] … pv[n]) │EC(c[1]) ∩ … ∩ EC(c[m]) ∩ EC(pv[1]) ∩…∩ EC(pv[n]) │
│value(p Individual(…)) │{x ∈ O | ∃ y∈EC(Individual(…)) : <x,y> ∈ ER(p)} │
│value(p id) for id an individual ID │{x ∈ O | <x,S(id)> ∈ ER(p) } │
│value(p v) for v a literal │{x ∈ O | <x,S(v)> ∈ ER(p) } │
│annotation(p o) for o a URI reference │{x ∈ R | <x,S(o)> ∈ ER(p) } │
│annotation(p Individual(…)) │{x ∈ R | ∃ y ∈ EC(Individual(…)) : <x,y> ∈ ER(p) } │
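To illustrate the table, consider a hypothetical interpretation (not from this document) in which O = {a, b, c}, ER(ex:hasChild) = {<a,b>, <a,c>}, and EC(ex:Person) = {b, c}. Then the table gives:

  EC(restriction(ex:hasChild someValuesFrom(ex:Person))) = {a}
  EC(restriction(ex:hasChild allValuesFrom(ex:Person)))  = {a, b, c}
  EC(restriction(ex:hasChild minCardinality(2)))         = {a}

Note that b and c belong to the allValuesFrom restriction vacuously, since they have no ex:hasChild values at all.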
3.3. Interpreting Axioms and Facts
An Abstract OWL interpretation, I, satisfies OWL axioms and facts as given in Axiom and Fact Interpretation Table. In the table, optional parts of axioms and facts are given in square brackets ([…])
and have corresponding optional conditions, also given in square brackets.
Interpretation of Axioms and Facts
│ Directive │ Conditions on interpretations │
│Class(c [Deprecated] complete │[ <S(c),S(owl:DeprecatedClass)> ∈ ER(rdf:type) ] │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) │S(c) ∈ EC(annotation(p[1] o[1])) … S(c) ∈ EC(annotation(p[k] o[k]))│
│descr[1] … descr[n]) │EC(c) = EC(descr[1]) ∩…∩ EC(descr[n]) │
│Class(c [Deprecated] partial │[ <S(c),S(owl:DeprecatedClass)> ∈ ER(rdf:type) ] │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) │S(c) ∈ EC(annotation(p[1] o[1])) … S(c) ∈ EC(annotation(p[k] o[k]))│
│descr[1] … descr[n]) │EC(c) ⊆ EC(descr[1]) ∩…∩ EC(descr[n]) │
│EnumeratedClass(c [Deprecated] │[ <S(c),S(owl:DeprecatedClass)> ∈ ER(rdf:type) ] │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) │S(c) ∈ EC(annotation(p[1] o[1])) … S(c) ∈ EC(annotation(p[k] o[k]))│
│i[1] … i[n]) │EC(c) = { S(i[1]), …, S(i[n]) } │
│Datatype(c [Deprecated] │[ <S(c),S(owl:DeprecatedClass)> ∈ ER(rdf:type) ] │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) ) │S(c) ∈ EC(annotation(p[1] o[1])) … S(c) ∈ EC(annotation(p[k] o[k]))│
│ │EC(c) ⊆ LV │
│DisjointClasses(d[1] … d[n]) │EC(d[i]) ∩ EC(d[j]) = { } for 1 ≤ i < j ≤ n │
│EquivalentClasses(d[1] … d[n]) │EC(d[i]) = EC(d[j]) for 1 ≤ i < j ≤ n │
│SubClassOf(d[1] d[2]) │EC(d[1]) ⊆ EC(d[2]) │
│DatatypeProperty(p [Deprecated]                                    │[ <S(p),S(owl:DeprecatedProperty)> ∈ ER(rdf:type) ]                │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) │S(p) ∈ EC(annotation(p[1] o[1])) … S(p) ∈ EC(annotation(p[k] o[k]))│
│super(s[1]) … super(s[n]) │ER(p) ⊆ O×LV ∩ ER(s[1]) ∩…∩ ER(s[n]) ∩ │
│domain(d[1]) … domain(d[n]) range(r[1]) … range(r[n]) │EC(d[1])×LV ∩…∩ EC(d[n])×LV ∩ O×EC(r[1]) ∩…∩ O×EC(r[n]) │
│[Functional]) │[ER(p) is functional] │
│ObjectProperty(p [Deprecated]                                      │[ <S(p),S(owl:DeprecatedProperty)> ∈ ER(rdf:type)]                 │
│annotation(p[1] o[1]) … annotation(p[k] o[k]) │S(p) ∈ EC(annotation(p[1] o[1])) … S(p) ∈ EC(annotation(p[k] o[k]))│
│super(s[1]) … super(s[n]) │ER(p) ⊆ O×O ∩ ER(s[1]) ∩…∩ ER(s[n]) ∩ │
│domain(d[1]) … domain(d[n]) range(r[1]) … range(r[n]) │EC(d[1])×O ∩…∩ EC(d[n])×O ∩ O×EC(r[1]) ∩…∩ O×EC(r[n]) │
│[inverseOf(i)] [Symmetric]                                         │[ER(p) is the inverse of ER(i)] [ER(p) is symmetric]               │
│[Functional] [ InverseFunctional] │[ER(p) is functional] [ER(p) is inverse functional] │
│[Transitive]) │[ER(p) is transitive] │
│AnnotationProperty(p annotation(p[1] o[1]) … annotation(p[k] o[k]))│S(p) ∈ EC(annotation(p[1] o[1])) … S(p) ∈ EC(annotation(p[k] o[k]))│
│OntologyProperty(p annotation(p[1] o[1]) … annotation(p[k] o[k])) │S(p) ∈ EC(annotation(p[1] o[1])) … S(p) ∈ EC(annotation(p[k] o[k]))│
│EquivalentProperties(p[1] … p[n]) │ER(p[i]) = ER(p[j]) for 1 ≤ i < j ≤ n │
│SubPropertyOf(p[1] p[2]) │ER(p[1]) ⊆ ER(p[2]) │
│SameIndividual(i[1] … i[n]) │S(i[j]) = S(i[k]) for 1 ≤ j < k ≤ n │
│DifferentIndividuals(i[1] … i[n]) │S(i[j]) ≠ S(i[k]) for 1 ≤ j < k ≤ n │
│Individual([i] annotation(p[1] o[1]) … annotation(p[k] o[k]) │EC(Individual([i] annotation(p[1] o[1]) … annotation(p[k] o[k]) │
│type(c[1]) … type(c[m]) pv[1] … pv[n]) │type(c[1]) … type(c[m]) pv[1] … pv[n])) is nonempty │
3.4. Interpreting Ontologies
From Section 2, an OWL ontology can have annotations, which need their own semantic conditions. Aside from this local meaning, an owl:imports annotation also imports the contents of another OWL
ontology into the current ontology. The imported ontology is the one, if any, that has as name the argument of the imports construct. (This treatment of imports is divorced from Web issues. The
intended use of names for OWL ontologies is to make the name be the location of the ontology on the Web, but this is outside of this formal treatment.)
Definition: Let D be a datatype map. An Abstract OWL interpretation, I, with respect to D with vocabulary consisting of V[L], V[C], V[D], V[I], V[DP], V[IP], V[AP], V[O], satisfies an OWL ontology,
O, iff
1. each URI reference in O used as a class ID (datatype ID, individual ID, data-valued property ID, individual-valued property ID, annotation property ID, annotation ID, ontology ID) belongs to V[C]
(V[D], V[I], V[DP], V[IP], V[AP], V[O], respectively);
2. each literal in O belongs to V[L];
3. I satisfies each directive in O, except for Ontology Annotations;
4. there is some o ∈ R with <o,S(owl:Ontology)> ∈ ER(rdf:type) such that for each Ontology Annotation of the form Annotation(p v), <o,S(v)> ∈ ER(p) and that if O has name n, then S(n) = o; and
5. I satisfies each ontology mentioned in an owl:imports annotation directive of O.
Definition: A collection of abstract OWL ontologies and axioms and facts is consistent with respect to datatype map D iff there is some interpretation I with respect to D such that I satisfies each
ontology and axiom and fact in the collection.
Definition: A collection O of abstract OWL ontologies and axioms and facts entails an abstract OWL ontology or axiom or fact O' with respect to a datatype map D if each interpretation with respect to
map D that satisfies each ontology and axiom and fact in O also satisfies O'.
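For instance (hypothetical identifiers), the definitions above make the following entailment hold: every interpretation satisfying the first two directives also satisfies the third, since the partial class axiom forces EC(ex:Dog) ⊆ EC(ex:Animal).

  Class(ex:Dog partial ex:Animal)
  Individual(ex:Fido type(ex:Dog))
  Individual(ex:Fido type(ex:Animal))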
4. Mapping to RDF Graphs (Normative)
This section of the document provides a mapping from the abstract syntax for OWL DL and OWL Lite given in Section 2 to the exchange syntax for OWL, namely RDF/XML [RDF Syntax]. This mapping (and its
inverse) provide the normative relationship between the abstract syntax and the exchange syntax. It is shown in Section 5 and Appendix A.1 that this mapping preserves the meaning of OWL DL
ontologies. Section 4.2 defines the OWL DL and OWL Lite dialects of OWL as those RDF graphs that are the result of mappings from abstract syntax ontologies.
The exchange syntax for OWL is RDF/XML [RDF Syntax], as specified in the OWL Reference Description [OWL Reference]. Further, the meaning of an OWL ontology in RDF/XML is determined only from the RDF
graph [RDF Concepts] that results from the RDF parsing of the RDF/XML document. Thus one way of translating an OWL ontology in abstract syntax form into the exchange syntax is by giving a
transformation of each directive into a collection of triples. As all OWL Lite constructs are special cases of constructs in the full abstract syntax, transformations are only provided for the OWL DL constructs.
OWL DL has semantics defined over the abstract syntax and a concrete syntax consisting of a subset of RDF graphs. Hence it is necessary to relate specific abstract syntax ontologies with specific RDF
/XML documents and their corresponding graphs. This section defines a many-to-many relationship between abstract syntax ontologies and RDF graphs. This is done using a set of nondeterministic mapping
rules. Thus to apply the semantics to a particular RDF graph it is necessary to find one of the abstract syntax ontologies that correspond with that graph under the mapping rules and to apply the
semantics to that abstract ontology. The mapping is designed so that any of the RDF graphs that correspond to a particular abstract ontology have the same meaning, as do any of the abstract
ontologies that correspond to a particular RDF graph. Moreover, since this process cannot be applied to RDF graphs that do not have corresponding abstract syntax forms, the mapping rules implicitly
define a set of graphs, which syntactically characterize OWL DL in RDF/XML.
The syntax for triples used here is the one used in the RDF semantics [RDF MT]. In this variant, qualified names are allowed. As detailed in the RDF semantics, to turn this syntax into the standard
one just expand the qualified names into URI references in the standard RDF manner by concatenating the namespace name with the local name, using the standard OWL namespaces.
4.1. Translation to RDF Graphs
The Transformation Table gives transformation rules that transform the abstract syntax to the OWL exchange syntax. In a few cases, notably for the DifferentIndividuals construct, there are different
transformation rules. In such cases either rule can be chosen, resulting in a non-deterministic translation. In a few other cases, notably for class and property axioms, there are triples that may or
may not be generated. These triples are indicated by flagging them with [opt]. In a couple of cases one of two triples must be generated. This is indicated by separating the triples with OR. These
non-determinisms allow the generation of more RDF Graphs.
The left column of the table gives a piece of abstract syntax (S); the center column gives its transformation into triples (T(S)); and the right column gives an identifier for the main node of the
transformation (M(T(S))), for syntactic constructs that can occur as pieces of directives. Repeating components are listed using ellipses, as in description[1] … description[n]; this form allows easy
specification of the transformation for all values of n allowed in the syntax. Optional portions of the abstract syntax (enclosed in square brackets) are optional portions of the transformation
(signified by square brackets). As well, for any of the built-in OWL datatypes, built-in OWL classes, built-in OWL annotation properties, and built-in OWL ontology properties the first rdf:type
triple in the translation of it or any axiom for it is optional.
Some transformations in the table are for directives. Other transformations are for parts of directives. The last transformation is for sequences, which are not part of the abstract syntax per se.
This last transformation is used to make some of the other transformations more compact and easier to read.
For many directives these transformation rules call for the transformation of components of the directive using other transformation rules. When the transformation of a component is used as the
subject, predicate, or object of a triple, even an optional triple, the transformation of the component is part of the production (but only once per production) and the main node of that
transformation should be used in the triple.
Bnode identifiers here must be taken as local to each transformation, i.e., different identifiers should be used for each invocation of a transformation rule. Ontologies without a name are given a
bnode as their main node; ontologies with a name use that name as their main node; in both cases this node is referred to as O below.
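As a small illustration of the rules in the table below (hypothetical identifiers), the directive Class(ex:Student partial ex:Person) can be transformed into the triples:

  ex:Student rdf:type owl:Class .
  ex:Person rdf:type owl:Class .
  ex:Student rdfs:subClassOf ex:Person .

Here the optional rdfs:Class typing triples are omitted; the second triple arises from the transformation of the component classID ex:Person, whose main node ex:Person is used as the object of the rdfs:subClassOf triple.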
Transformation to Triples
│Abstract Syntax (and sequences)│ Transformation - T(S) │ Main Node - M(T │
│ - S │ │ (S)) │
│Ontology(O directive[1] … │O rdf:type owl:Ontology . │ │
│directive[n]) │T(directive[1]) … T(directive[n]) │ │
│Ontology(directive[1] … │O rdf:type owl:Ontology . │ │
│directive[n]) │T(directive[1]) … T(directive[n]) │ │
│Annotation(ontologyPropertyID │ontologyPropertyID rdf:type owl:OntologyProperty . │ │
│URIreference) │O ontologyPropertyID URIreference . │ │
│ │URIreference rdf:type owl:Ontology . │ │
│Annotation(annotationPropertyID│annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│URIreference) │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│ │O annotationPropertyID URIreference . │ │
│Annotation(annotationPropertyID│annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│dataLiteral) │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│ │O annotationPropertyID T(dataLiteral) . │ │
│Annotation(annotationPropertyID│annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│individual) │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│ │O annotationPropertyID T(individual) . │ │
│rdfs:Literal │ │rdfs:Literal │
│datatypeID │datatypeID rdf:type rdfs:Datatype . │datatypeID │
│classID │classID rdf:type owl:Class . │classID │
│ │classID rdf:type rdfs:Class . [opt] │ │
│individualID │ │individualID │
│datavaluedPropertyID │datavaluedPropertyID rdf:type owl:DatatypeProperty . │datavalued- │
│ │datavaluedPropertyID rdf:type rdf:Property . [opt] │PropertyID │
│ │individualvaluedPropertyID rdf:type owl:ObjectProperty . [opt if there is a triple in the translation of the ontology that types the │individualvalued-│
│individualvaluedPropertyID │individualvaluedPropertyID as owl:InverseFunctionalProperty, owl:TransitiveProperty, or owl:SymmetricProperty] . │PropertyID │
│ │individualvaluedPropertyID rdf:type rdf:Property . [opt] │ │
│dataLiteral │dataLiteral │dataLiteral │
│Individual(iID annotation[1] … │iID T(annotation[1]) … iID T(annotation[m]) │ │
│annotation[m] │iID rdf:type T(type[1]) . … iID rdf:type T(type[n]) . │ │
│type(type[1])… type(type[n]) │iID T(pID[1]) T(v[1]) . … iID T(pID[k]) T(v[k]) . │iID │
│value(pID[1] v[1]) … value(pID │ │ │
│[k] v[k])) │ │ │
│Individual(annotation[1] … │_:x T(annotation[1]) … _:x T(annotation[m]) │ │
│annotation[m] │_:x rdf:type T(type[1]) . … _:x rdf:type T(type[n]) . │ │
│type(type[1])…type(type[n]) │_:x T(pID[1]) T(v[1]) . … _:x T(pID[k]) T(v[k]) . │_:x │
│value(pID[1] v[1]) … value(pID │ │ │
│[k] v[k])) │ │ │
│(With at least one type.) │ │ │
│Individual(annotation[1] … │_:x T(annotation[1]) … _:x T(annotation[m]) │ │
│annotation[m] │_:x rdf:type owl:Thing . │_:x │
│value(pID[1] v[1]) … value(pID │_:x T(pID[1]) T(v[1]) . … _:x T(pID[k]) T(v[k]) . │ │
│[k] v[k])) │ │ │
│SameIndividual(iID[1] … iID[n])│iID[i] owl:sameAs iID[i+1] . 1≤i<n │ │
│ │iID[i] owl:sameAs iID[j] . [opt] 1≤i≠j≤n │ │
│DifferentIndividuals(iID[1] … │iID[i] owl:differentFrom iID[j] . OR │ │
│iID[n]) │iID[j] owl:differentFrom iID[i] . 1≤i<j≤n │ │
│ │iID[j] owl:differentFrom iID[i] . [opt] 1≤i≠j≤n │ │
│DifferentIndividuals(iID[1] … │_:x rdf:type owl:AllDifferent . │ │
│iID[n]) │_:x owl:distinctMembers T(SEQ iID[1] … iID[n]) . │ │
│Class(classID [Deprecated] │classID rdf:type owl:Class . │ │
│partial │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│description[1] … description[n]│classID T(annotation[1]) … classID T(annotation[m]) │ │
│) │classID rdfs:subClassOf T(description[1]) . … │ │
│ │classID rdfs:subClassOf T(description[n]) . │ │
│Class(classID [Deprecated] │classID rdf:type owl:Class . │ │
│complete │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│description[1] … description[n]│classID T(annotation[1]) … classID T(annotation[m]) │ │
│) │classID owl:intersectionOf T(SEQ description[1]…description[n]) . │ │
│Class(classID [Deprecated] │classID rdf:type owl:Class . │ │
│complete │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│description) │classID T(annotation[1]) … classID T(annotation[m]) │ │
│ │classID owl:equivalentClass T(description) . │ │
│Class(classID [Deprecated] │classID rdf:type owl:Class . │ │
│complete │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│unionOf(description[1] … │classID T(annotation[1]) … classID T(annotation[m]) │ │
│description[n])) │classID owl:unionOf T(SEQ description[1]…description[n]) . │ │
│Class(classID [Deprecated] │classID rdf:type owl:Class . │ │
│complete │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│complementOf(description)) │classID T(annotation[1]) … classID T(annotation[m]) │ │
│ │classID owl:complementOf T(description) . │ │
│EnumeratedClass(classID [ │classID rdf:type owl:Class . │ │
│Deprecated] │classID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] │[classID rdf:type owl:DeprecatedClass .] │ │
│iID[1] … iID[n]) │classID T(annotation[1]) … classID T(annotation[m]) . │ │
│ │classID owl:oneOf T(SEQ iID[1]…iID[n]) . │ │
│DisjointClasses(description[1] │T(description[i]) owl:disjointWith T(description[j]) . OR │ │
│… description[n]) │T(description[j]) owl:disjointWith T(description[i]) . 1≤i<j≤n │ │
│ │T(description[i]) owl:disjointWith T(description[j]) . [opt] 1≤i≠j≤n │ │
│EquivalentClasses(description │T(description[i]) owl:equivalentClass T(description[j]) . │ │
│[1] … description[n]) │for all <i,j> in G where G is a set of pairs over {1,...,n}x{1,...,n} │ │
│ │that if interpreted as an undirected graph forms a connected graph for {1,...,n} │ │
│SubClassOf(description[1] │T(description[1]) rdfs:subClassOf T(description[2]) . │ │
│description[2]) │ │ │
│Datatype(datatypeID [Deprecated│datatypeID rdf:type rdfs:Datatype . │ │
│] │datatypeID rdf:type rdfs:Class . [opt] │ │
│annotation[1] … annotation[m] )│[datatypeID rdf:type owl:DeprecatedClass .] │ │
│ │datatypeID T(annotation[1]) … datatypeID T(annotation[m]) │ │
│unionOf(description[1] … │_:x rdf:type owl:Class . │ │
│description[n]) │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:unionOf T(SEQ description[1]…description[n]) . │ │
│intersectionOf(description[1] …│_:x rdf:type owl:Class . │ │
│description[n]) │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:intersectionOf T(SEQ description[1]…description[n]) . │ │
│complementOf(description) │_:x rdf:type owl:Class . │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:complementOf T(description) . │ │
│oneOf(iID[1] … iID[n]) │_:x rdf:type owl:Class . │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:oneOf T(SEQ iID[1]…iID[n]) . │ │
│oneOf(v[1] … v[n]) │_:x rdf:type owl:DataRange . │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:oneOf T(SEQ v[1] … v[n]) . │ │
│restriction(ID component[1] … │_:x rdf:type owl:Class . │ │
│component[n]) │_:x rdf:type rdfs:Class . [opt] │_:x │
│(With at least two components) │_:x owl:intersectionOf │ │
│ │T(SEQ(restriction(ID component[1]) … restriction(ID component[n]))) . │ │
│restriction(ID allValuesFrom( │_:x rdf:type owl:Restriction . │ │
│range)) │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:allValuesFrom T(range) . │ │
│restriction(ID someValuesFrom( │_:x rdf:type owl:Restriction . │ │
│required)) │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:someValuesFrom T(required) . │ │
│restriction(ID value(value)) │_:x rdf:type owl:Restriction . │ │
│ │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:hasValue T(value) . │ │
│restriction(ID minCardinality( │_:x rdf:type owl:Restriction . │ │
│min)) │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:minCardinality "min"^^xsd:nonNegativeInteger . │ │
│restriction(ID maxCardinality( │_:x rdf:type owl:Restriction . │ │
│max)) │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:maxCardinality "max"^^xsd:nonNegativeInteger . │ │
│restriction(ID cardinality(card│_:x rdf:type owl:Restriction . │ │
│)) │_:x rdf:type owl:Class . [opt] │ │
│ │_:x rdf:type rdfs:Class . [opt] │_:x │
│ │_:x owl:onProperty T(ID) . │ │
│ │_:x owl:cardinality "card"^^xsd:nonNegativeInteger . │ │
│DatatypeProperty(ID [Deprecated│ID rdf:type owl:DatatypeProperty . │ │
│] │ID rdf:type rdf:Property . [opt] │ │
│annotation[1] … annotation[m] │[ID rdf:type owl:DeprecatedProperty .] │ │
│super(super[1])… super(super[n]│ID T(annotation[1]) … ID T(annotation[m]) │ │
│) │ID rdfs:subPropertyOf T(super[1]) . … │ │
│domain(domain[1])… │ID rdfs:subPropertyOf T(super[n]) . │ │
│domain(domain[k]) │ID rdfs:domain T(domain[1]) . … │ │
│range(range[1])… │ID rdfs:domain T(domain[k]) . │ │
│range(range[h]) │ID rdfs:range T(range[1]) . … │ │
│[Functional]) │ID rdfs:range T(range[h]) . │ │
│ │[ID rdf:type owl:FunctionalProperty . ] │ │
│ObjectProperty(ID [Deprecated] │ID rdf:type owl:ObjectProperty . │ │
│annotation[1] … annotation[m] │[opt if one of the last three triples is included] │ │
│super(super[1])… super(super[n]│ID rdf:type rdf:Property . [opt] │ │
│) │[ID rdf:type owl:DeprecatedProperty .] │ │
│domain(domain[1])… │ID T(annotation[1]) … ID T(annotation[m]) │ │
│domain(domain[k]) │ID rdfs:subPropertyOf T(super[1]) . … │ │
│range(range[1])… │ID rdfs:subPropertyOf T(super[n]) . │ │
│range(range[h]) │ID rdfs:domain T(domain[1]) . … │ │
│[inverseOf(inverse)] │ID rdfs:domain T(domain[k]) . │ │
│[Functional | │ID rdfs:range T(range[1]) . … │ │
│InverseFunctional | │ID rdfs:range T(range[h]) . │ │
│Transitive]) │[ID owl:inverseOf T(inverse) .] │ │
│[Symmetric] │[ID rdf:type owl:FunctionalProperty . ] │ │
│ │[ID rdf:type owl:InverseFunctionalProperty . ] │ │
│ │[ID rdf:type owl:TransitiveProperty . ] │ │
│ │[ID rdf:type owl:SymmetricProperty . ] │ │
│AnnotationProperty(ID │ID rdf:type owl:AnnotationProperty . │ │
│annotation[1] … annotation[m]) │ID rdf:type rdf:Property . [opt] │ │
│ │ID T(annotation[1]) … ID T(annotation[m]) │ │
│OntologyProperty(ID │ID rdf:type owl:OntologyProperty . │ │
│annotation[1] … annotation[m]) │ID rdf:type rdf:Property . [opt] │ │
│ │ID T(annotation[1]) … ID T(annotation[m]) │ │
│EquivalentProperties(dvpID[1] …│T(dvpID[i]) owl:equivalentProperty T(dvpID[i+1]) . 1≤i<n │ │
│dvpID[n]) │ │ │
│SubPropertyOf(dvpID[1] dvpID[2]│T(dvpID[1]) rdfs:subPropertyOf T(dvpID[2]) . │ │
│) │ │ │
│EquivalentProperties(ivpID[1] …│T(ivpID[i]) owl:equivalentProperty T(ivpID[i+1]) . 1≤i<n │ │
│ivpID[n]) │ │ │
│SubPropertyOf(ivpID[1] ivpID[2]│T(ivpID[1]) rdfs:subPropertyOf T(ivpID[2]) . │ │
│) │ │ │
│annotation(annotationPropertyID│annotationPropertyID URIreference . │ │
│URIreference) │annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│ │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│annotation(annotationPropertyID│annotationPropertyID T(dataLiteral) . │ │
│dataLiteral) │annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│ │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│annotation(annotationPropertyID│annotationPropertyID T(individual) . │ │
│individual) │annotationPropertyID rdf:type owl:AnnotationProperty . │ │
│ │annotationPropertyID rdf:type rdf:Property . [opt] │ │
│SEQ │ │rdf:nil │
│SEQ item[1]…item[n] │_:l[1] rdf:type rdf:List . [opt] │ │
│ │_:l[1] rdf:first T(item[1]) . _:l[1] rdf:rest _:l[2] . │ │
│ │… │_:l[1] │
│ │_:ln rdf:type rdf:List . [opt] │ │
│ │_:ln rdf:first T(item[n]) . _:ln rdf:rest rdf:nil . │ │
This transformation is not injective, as several OWL abstract ontologies that do not use the above reserved vocabulary can map into equal RDF graphs. However, the only cases where this can happen are
with constructs that have the same meaning, such as several DisjointClasses axioms having the same effect as one larger one. It would be possible to define a canonical inverse transformation, if needed.
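As an informative illustration (not part of the normative mapping), the SEQ rows of the transformation table can be sketched in a few lines. Blank nodes are represented here as hypothetical strings "_:l1", "_:l2", …, and qnames stand in for full URI references:

```python
# Informative sketch of T(SEQ item[1] ... item[n]): each sequence becomes a
# linked list of rdf:first/rdf:rest triples ending in rdf:nil.

RDF_FIRST = "rdf:first"
RDF_REST = "rdf:rest"
RDF_NIL = "rdf:nil"

def seq_to_triples(items):
    """Translate SEQ item[1] ... item[n] into list triples.

    Returns (main_node, triples): the main node is rdf:nil for the empty
    sequence, otherwise the first blank list node _:l1.
    """
    if not items:
        return RDF_NIL, []
    triples = []
    nodes = [f"_:l{i + 1}" for i in range(len(items))]
    for i, item in enumerate(items):
        nxt = nodes[i + 1] if i + 1 < len(items) else RDF_NIL
        triples.append((nodes[i], RDF_FIRST, item))
        triples.append((nodes[i], RDF_REST, nxt))
    return nodes[0], triples
```

For example, `seq_to_triples(["ex:A", "ex:B"])` yields the same shape of graph that owl:unionOf and owl:oneOf point at in the table above.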
4.2. Definition of OWL DL and OWL Lite Ontologies in RDF Graph Form
When considering OWL Lite and DL ontologies in RDF graph form, care must be taken to prevent the use of certain vocabulary as OWL classes, properties, or individuals. If this is not done, the built-in
definitions or use of this vocabulary (in the RDF or OWL specification) would augment the information in the OWL ontology. Only some of the RDF vocabulary falls in this category, as some of the RDF
vocabulary, such as rdf:subject, is given little or no meaning by the RDF specifications and its use does not present problems, as long as the use is consistent with any meaning given by the RDF specifications.
Definition: The disallowed vocabulary from RDF is rdf:type, rdf:Property, rdf:nil, rdf:List, rdf:first, rdf:rest, rdfs:domain, rdfs:range, rdfs:Resource, rdfs:Datatype, rdfs:Class, rdfs:subClassOf,
rdfs:subPropertyOf, rdfs:member, rdfs:Container and rdfs:ContainerMembershipProperty. The disallowed vocabulary from OWL is owl:AllDifferent, owl:allValuesFrom, owl:AnnotationProperty,
owl:cardinality, owl:Class, owl:complementOf, owl:DataRange, owl:DatatypeProperty, owl:DeprecatedClass, owl:DeprecatedProperty, owl:differentFrom, owl:disjointWith, owl:distinctMembers,
owl:equivalentClass, owl:equivalentProperty, owl:FunctionalProperty, owl:hasValue, owl:intersectionOf, owl:InverseFunctionalProperty, owl:inverseOf, owl:maxCardinality, owl:minCardinality,
owl:ObjectProperty, owl:oneOf, owl:onProperty, owl:Ontology, owl:OntologyProperty, owl:Restriction, owl:sameAs, owl:someValuesFrom, owl:SymmetricProperty, owl:TransitiveProperty, and owl:unionOf. The
disallowed vocabulary is the union of the disallowed vocabulary from RDF and the disallowed vocabulary from OWL.
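As an informative aside, the normative mapping of Section 4.1 never produces a disallowed name in the subject position of a triple, so a crude syntactic check for misuse can scan subjects. The sketch below uses a representative subset of the disallowed names (a real checker would enumerate all of them) and hypothetical qname strings for URI references:

```python
# Informative sketch: flag disallowed vocabulary appearing in the subject
# position of a triple, a position the Section 4.1 mapping never gives it.
# Only a representative subset of the disallowed names is listed.

DISALLOWED = {
    "rdf:type", "rdf:Property", "rdf:nil", "rdf:List", "rdf:first",
    "rdf:rest", "rdfs:domain", "rdfs:range", "rdfs:Class",
    "rdfs:subClassOf", "owl:Class", "owl:onProperty", "owl:Restriction",
    "owl:unionOf",
}

def disallowed_in_subject(triples):
    """Return the disallowed names used as subjects in the given triples."""
    return {s for s, _p, _o in triples if s in DISALLOWED}
```

Note that this is only a heuristic: disallowed vocabulary legitimately occurs in predicate and object positions (e.g. `ex:C rdf:type owl:Class .`), so those positions are not flagged.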
Definition: The class-only vocabulary is rdf:Statement, rdf:Seq, rdf:Bag, and rdf:Alt. The datatype-only vocabulary is the built-in OWL datatypes. The property-only vocabulary is rdf:subject,
rdf:predicate, rdf:object, and all the container membership properties, i.e., rdf:_1, rdf:_2, ….
Definition: A collection of OWL DL ontologies and axioms and facts in abstract syntax form, O, has a separated vocabulary if
1. the ontologies in O, taken together, do not use any URI reference as more than one of a class ID, a datatype ID, an individual ID, an individual-valued property ID, a data-valued property ID, an
annotation property ID, an ontology property ID, or an ontology ID;
2. the ontologies in O, taken together, provide a type for every individual ID;
3. the ontologies in O only use the class-only vocabulary as class IDs; only use the datatype-only vocabulary as datatype IDs; only use rdfs:Literal in data ranges; only use the property-only
vocabulary as datavaluedProperty IDs, individualvaluedProperty IDs, or annotationProperty IDs; and do not mention any disallowed vocabulary.
Definition: An RDF graph is an OWL DL ontology in RDF graph form if it is equal (see below for a slight relaxation) to a result of the transformation to triples above of a collection of OWL DL
ontologies and axioms and facts in abstract syntax form that has a separated vocabulary. For the purposes of determining whether an RDF graph is an OWL DL ontology in RDF graph form, cardinality
restrictions are explicitly allowed to use constructions like "1"^^xsd:integer so long as the data value so encoded is a non-negative integer.
Definition: An RDF graph is an OWL Lite ontology in RDF graph form if it is as above except that the contents of O are OWL Lite ontologies or axioms or facts in abstract syntax form.
5. RDF-Compatible Model-Theoretic Semantics (Normative)
This model-theoretic semantics for OWL is an extension of the semantics defined in the RDF semantics [RDF MT], and defines the OWL semantic extension of RDF.
NOTE: There is a strong correspondence between the semantics for OWL DL defined in this section and the Direct Model-Theoretic Semantics defined in Section 3 (see Theorem 1 and Theorem 2 in Section
5.4). If, however, any conflict should ever arise between these two forms, then the Direct Model-Theoretic Semantics takes precedence.
5.1. The OWL and RDF universes
All of the OWL vocabulary is defined on the 'OWL universe', which is a division of part of the RDF universe into three parts, namely OWL individuals, classes, and properties. The class extension of
owl:Thing comprises the individuals of the OWL universe. The class extension of owl:Class comprises the classes of the OWL universe. The union of the class extensions of owl:ObjectProperty,
owl:DatatypeProperty, owl:AnnotationProperty, and owl:OntologyProperty comprises the properties of the OWL universe.
There are two different styles of using OWL. In the more free-wheeling style, called OWL Full, the three parts of the OWL universe are identified with their RDF counterparts, namely the class
extensions of rdfs:Resource, rdfs:Class, and rdf:Property. In OWL Full, as in RDF, elements of the OWL universe can be both an individual and a class, or, in fact, even an individual, a class, and a
property. In the more restrictive style, called OWL DL here, the three parts are different from their RDF counterparts and, moreover, pairwise disjoint. The more-restrictive OWL DL style gives up
some expressive power in return for decidability of entailment. Both styles of OWL provide entailments that are missing in a naive translation of the DAML+OIL model-theoretic semantics into the RDF semantics.
A major difference in practice between the two styles lies in the care that is required to ensure that URI references are actually in the appropriate part of the OWL universe. In OWL Full, no care is
needed. In OWL DL, localizing information must be provided for many of the URI references used. These localizing assumptions are all trivially true in OWL Full, and can also be ignored when one uses
the OWL abstract syntax, which corresponds closely to OWL DL. When writing OWL DL in triples, however, close attention must be paid to which elements of the vocabulary belong to which part of the
OWL universe.
Throughout this section the OWL vocabulary will be the disallowed vocabulary from OWL plus the built-in classes, the built-in annotation properties, and the built-in ontology properties.
5.2. OWL Interpretations
The semantics of OWL DL and OWL Full are very similar. The common portion of their semantics is thus given first, and the differences left until later.
From the RDF semantics [RDF MT], for V a set of URI references and literals containing the RDF and RDFS vocabulary and D a datatype map, a D-interpretation of V is a tuple I = < R[I], P[I], EXT[I], S
[I], L[I], LV[I] >. R[I] is the domain of discourse or universe, i.e., a nonempty set that contains the denotations of URI references and literals in V. P[I] is a subset of R[I], the properties of I.
EXT[I] is used to give meaning to properties, and is a mapping from P[I] to P(R[I] × R[I]). S[I] is a mapping from URI references in V to their denotations in R[I]. L[I] is a mapping from typed
literals in V to their denotations in R[I]. LV[I] is a subset of R[I] that contains at least the set of Unicode strings, the set of pairs of Unicode strings and language tags, and the value spaces
for each datatype in D. The set of classes C[I] is defined as C[I] = { x ∈R[I] | <x,S[I](rdfs:Class)> ∈ EXT[I](S[I](rdf:type)) }, and the mapping CEXT[I] from C[I] to P(R[I]) is defined as CEXT[I](c)
= { x∈R[I] | <x,c>∈EXT[I](S[I](rdf:type)) }. D-interpretations must meet several other conditions, as detailed in the RDF semantics. For example, EXT[I](S[I](rdfs:subClassOf)) must be a transitive
relation and the class extension of all datatypes must be subsets of LV[I].
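For finite interpretations, the definitions of C[I] and CEXT[I] above can be restated computationally. An informative sketch, where EXT[I](S[I](rdf:type)) is given as a set of pairs and all names are hypothetical stand-ins for the model-theoretic objects:

```python
# Informative sketch of the definitions of C[I] and CEXT[I], for a finite
# D-interpretation whose rdf:type extension is a set of <instance, class>
# pairs.

def classes(ext_type, rdfs_class="rdfs:Class"):
    """C[I] = { x in R[I] | <x, S[I](rdfs:Class)> in EXT[I](S[I](rdf:type)) }."""
    return {x for (x, c) in ext_type if c == rdfs_class}

def cext(ext_type, c):
    """CEXT[I](c) = { x in R[I] | <x, c> in EXT[I](S[I](rdf:type)) }."""
    return {x for (x, cc) in ext_type if cc == c}
```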
Definition: Let D be a datatype map that includes datatypes for xsd:integer and xsd:string. An OWL interpretation, I = < R[I], P[I], EXT[I], S[I], L[I], LV[I] >, of a vocabulary V, where V includes
the RDF and RDFS vocabularies and the OWL vocabulary, is a D-interpretation of V that satisfies all the constraints in this section.
Note: Elements of the OWL vocabulary that construct descriptions in the abstract syntax are given a different treatment from elements of the OWL vocabulary that correspond to (other) semantic
relationships. The former have only-if semantic conditions and comprehension principles; the latter have if-and-only-if semantic conditions. The only-if semantic conditions for the former are needed
to prevent semantic paradoxes and other problems with the semantics. The comprehension principles for the former and the if-and-only-if semantic conditions for the latter are needed so that useful
entailments are valid.
Conditions concerning the parts of the OWL universe and syntactic categories
│ │ then │ │
│ If E is ├────────────────┬─────────────────┬────────────────────┤ Note │
│ │ S[I](E)∈ │CEXT[I](S[I](E))=│ and │ │
│owl:Class │C[I] │IOC │IOC⊆C[I] │This defines IOC as the set of OWL classes. │
│rdfs:Datatype │ │IDC │IDC⊆C[I] │This defines IDC as the set of OWL datatypes. │
│owl:Restriction │C[I] │IOR │IOR⊆IOC │This defines IOR as the set of OWL restrictions. │
│owl:Thing │IOC │IOT │IOT⊆R[I] and IOT ≠ ∅│This defines IOT as the set of OWL individuals. │
│owl:Nothing │IOC │{} │ │ │
│rdfs:Literal │IDC │LV[I] │LV[I]⊆R[I] │ │
│owl:ObjectProperty │C[I] │IOOP │IOOP⊆P[I] │This defines IOOP as the set of OWL individual-valued properties. │
│owl:DatatypeProperty │C[I] │IODP │IODP⊆P[I] │This defines IODP as the set of OWL datatype properties. │
│owl:AnnotationProperty│C[I] │IOAP │IOAP⊆P[I] │This defines IOAP as the set of OWL annotation properties. │
│owl:OntologyProperty │C[I] │IOXP │IOXP⊆P[I] │This defines IOXP as the set of OWL ontology properties. │
│owl:Ontology │C[I] │IX │ │This defines IX as the set of OWL ontologies. │
│owl:AllDifferent │C[I] │IAD │ │ │
│rdf:List │ │IL │IL⊆R[I] │This defines IL as the set of OWL lists. │
│rdf:nil │IL │ │ │ │
│"l"^^d │CEXT[I](S[I](d))│ │S[I]("l"^^d) ∈ LV[I]│Typed literals are well-behaved in OWL. │
OWL built-in syntactic classes and properties
I(owl:FunctionalProperty), I(owl:InverseFunctionalProperty), I(owl:SymmetricProperty), I(owl:TransitiveProperty), I(owl:DeprecatedClass), and I(owl:DeprecatedProperty) are in C[I].
I(owl:equivalentClass), I(owl:disjointWith), I(owl:equivalentProperty), I(owl:inverseOf), I(owl:sameAs), I(owl:differentFrom), I(owl:complementOf), I(owl:unionOf), I(owl:intersectionOf), I(owl:oneOf
), I(owl:allValuesFrom), I(owl:onProperty), I(owl:someValuesFrom), I(owl:hasValue), I(owl:minCardinality), I(owl:maxCardinality), I(owl:cardinality), and I(owl:distinctMembers) are all in P[I].
I(owl:versionInfo), I(rdfs:label), I(rdfs:comment), I(rdfs:seeAlso), and I(rdfs:isDefinedBy) are all in IOAP. I(owl:imports), I(owl:priorVersion), I(owl:backwardCompatibleWith), and I(
owl:incompatibleWith), are all in IOXP.
Characteristics of OWL classes, datatypes, and properties
│ If E is │ then if e∈CEXT[I](S[I](E)) then │ Note │
│owl:Class │CEXT[I](e)⊆IOT │Instances of OWL classes are OWL individuals. │
│rdfs:Datatype │CEXT[I](e)⊆LV[I] │ │
│owl:DataRange │CEXT[I](e)⊆LV[I] │OWL dataranges are special kinds of datatypes. │
│owl:ObjectProperty │EXT[I](e)⊆IOT×IOT │Values for individual-valued properties are OWL individuals. │
│owl:DatatypeProperty │EXT[I](e)⊆IOT×LV[I] │Values for datatype properties are literal values. │
│owl:AnnotationProperty │EXT[I](e)⊆IOT×(IOT∪LV[I]) │Values for annotation properties are less constrained.   │
│owl:OntologyProperty │EXT[I](e)⊆IX×IX │Ontology properties relate ontologies to other ontologies. │
│ If E is │ then c∈CEXT[I](S[I](E)) iff c∈IOOP∪IODP and │ Note │
│owl:FunctionalProperty │<x,y[1]>, <x,y[2]> ∈ EXT[I](c) implies y[1] = y[2]│Both individual-valued and datatype properties can be functional properties.│
│ If E is │ then c∈CEXT[I](S[I](E)) iff c∈IOOP and │ Note │
│owl:InverseFunctionalProperty│<x[1],y>, <x[2],y>∈EXT[I](c) implies x[1] = x[2] │Only individual-valued properties can be inverse functional properties. │
│owl:SymmetricProperty │<x,y> ∈ EXT[I](c) implies <y, x>∈EXT[I](c) │Only individual-valued properties can be symmetric properties. │
│owl:TransitiveProperty │<x,y>, <y,z>∈EXT[I](c) implies <x,z>∈EXT[I](c) │Only individual-valued properties can be transitive properties. │
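The if-and-only-if characteristics in the table above are directly checkable on a finite property extension. An informative sketch, with EXT[I](c) given as a set of hypothetical <x, y> pairs:

```python
# Informative sketch: test the property-characteristic conditions of the
# table above on a finite property extension (a set of (x, y) pairs).

def is_functional(ext):
    """<x,y1>, <x,y2> in EXT[I](c) implies y1 = y2."""
    seen = {}
    for x, y in ext:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def is_inverse_functional(ext):
    """<x1,y>, <x2,y> in EXT[I](c) implies x1 = x2."""
    return is_functional({(y, x) for (x, y) in ext})

def is_symmetric(ext):
    """<x,y> in EXT[I](c) implies <y,x> in EXT[I](c)."""
    return all((y, x) in ext for (x, y) in ext)

def is_transitive(ext):
    """<x,y>, <y,z> in EXT[I](c) implies <x,z> in EXT[I](c)."""
    return all((x, z) in ext
               for (x, y1) in ext for (y2, z) in ext if y1 == y2)
```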
If-and-only-if conditions for rdfs:subClassOf, rdfs:subPropertyOf, rdfs:domain, and rdfs:range
│ If E is │ then for │ <x,y>∈EXT[I](S[I](E)) iff │
│rdfs:subClassOf │x,y∈IOC │CEXT[I](x) ⊆ CEXT[I](y) │
│rdfs:subPropertyOf│x,y∈IOOP │EXT[I](x) ⊆ EXT[I](y) │
│rdfs:subPropertyOf│x,y∈IODP │EXT[I](x) ⊆ EXT[I](y) │
│rdfs:domain │x∈IOOP∪IODP,y∈IOC │<z,w>∈EXT[I](x) implies z∈CEXT[I](y) │
│rdfs:range │x∈IOOP∪IODP,y∈IOC∪IDC│<w,z>∈EXT[I](x) implies z∈CEXT[I](y) │
Characteristics of OWL vocabulary related to equivalence
│ If E is │ then <x,y>∈EXT[I](S[I](E)) iff │
│owl:equivalentClass │x,y∈IOC and CEXT[I](x)=CEXT[I](y) │
│owl:disjointWith │x,y∈IOC and CEXT[I](x)∩CEXT[I](y)={} │
│owl:equivalentProperty│x,y∈IOOP∪IODP and EXT[I](x) = EXT[I](y) │
│owl:inverseOf │x,y∈IOOP and <u,v>∈EXT[I](x) iff <v,u>∈EXT[I](y) │
│owl:sameAs │x = y │
│owl:differentFrom │x ≠ y │
Conditions on OWL vocabulary related to boolean combinations and sets
We will say that l[1] is a sequence of y[1],…,y[n] over C iff either n=0 and l[1]=S[I](rdf:nil), or n>0 and l[1]∈IL and ∃ l[2], …, l[n] ∈ IL such that
<l[1],y[1]>∈EXT[I](S[I](rdf:first)), y[1]∈C, <l[1],l[2]>∈EXT[I](S[I](rdf:rest)), …,
<l[n],y[n]>∈EXT[I](S[I](rdf:first)), y[n]∈C, and <l[n],S[I](rdf:nil)>∈EXT[I](S[I](rdf:rest)).
│ If E is │ then <x,y>∈EXT[I](S[I](E)) iff │
│owl:complementOf │x,y∈ IOC and CEXT[I](x)=IOT-CEXT[I](y) │
│owl:unionOf │x∈IOC and y is a sequence of y[1],…y[n] over IOC and CEXT[I](x) = CEXT[I](y[1])∪…∪CEXT[I](y[n]) │
│owl:intersectionOf│x∈IOC and y is a sequence of y[1],…y[n] over IOC and CEXT[I](x) = CEXT[I](y[1])∩…∩CEXT[I](y[n]) │
│owl:oneOf │x∈C[I] and y is a sequence of y[1],…y[n] over IOT or over LV[I] and CEXT[I](x) = {y[1],..., y[n]} │
Further conditions on owl:oneOf
│ If E is │ and │then if <x,l>∈EXT[I](S[I](E)) then│
│owl:oneOf│l is a sequence of y[1],…y[n] over LV[I] │x∈IDC │
│owl:oneOf│l is a sequence of y[1],…y[n] over IOT │x∈IOC │
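An informative companion to the conditions above: the "y is a sequence of y[1],…,y[n]" notion can be read back from rdf:first/rdf:rest triples, after which the owl:unionOf and owl:intersectionOf class extensions are ordinary set operations. Triples are hypothetical (subject, predicate, object) tuples:

```python
# Informative sketch: decode an RDF list node into its item sequence, then
# compute the class extensions required by owl:unionOf / owl:intersectionOf.

def decode_sequence(triples, node, nil="rdf:nil"):
    """Return [y1, ..., yn] when node is a sequence of y1,...,yn."""
    first = {s: o for (s, p, o) in triples if p == "rdf:first"}
    rest = {s: o for (s, p, o) in triples if p == "rdf:rest"}
    items = []
    while node != nil:
        items.append(first[node])
        node = rest[node]
    return items

def union_extension(cexts):
    """CEXT[I](x) = CEXT[I](y1) ∪ ... ∪ CEXT[I](yn)."""
    out = set()
    for e in cexts:
        out |= e
    return out

def intersection_extension(cexts):
    """CEXT[I](x) = CEXT[I](y1) ∩ ... ∩ CEXT[I](yn), for n > 0."""
    out = set(cexts[0])
    for e in cexts[1:]:
        out &= e
    return out
```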
Conditions on OWL restrictions
│ If                                      │ then x∈IOR, y∈IOC∪IDC, p∈IOOP∪IODP, and CEXT[I](x) =                                │
│<x,y>∈EXT[I](S[I](owl:allValuesFrom)) ∧  │{u∈IOT | <u,v>∈EXT[I](p) implies v∈CEXT[I](y) }                                      │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
│<x,y>∈EXT[I](S[I](owl:someValuesFrom)) ∧ │{u∈IOT | ∃ <u,v>∈EXT[I](p) such that v∈CEXT[I](y) }                                  │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
│ If                                      │ then x∈IOR, y∈IOT∪LV[I], p∈IOOP∪IODP, and CEXT[I](x) =                              │
│<x,y>∈EXT[I](S[I](owl:hasValue)) ∧       │{u∈IOT | <u, y>∈EXT[I](p) }                                                          │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
│ If                                      │then x∈IOR, y∈LV[I], y is a non-negative integer, p∈IOOP∪IODP, and CEXT[I](x) =      │
│<x,y>∈EXT[I](S[I](owl:minCardinality)) ∧ │{u∈IOT | card({v ∈ IOT ∪ LV[I] : <u,v>∈EXT[I](p)}) ≥ y }                             │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
│<x,y>∈EXT[I](S[I](owl:maxCardinality)) ∧ │{u∈IOT | card({v ∈ IOT ∪ LV[I] : <u,v>∈EXT[I](p)}) ≤ y }                             │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
│<x,y>∈EXT[I](S[I](owl:cardinality)) ∧    │{u∈IOT | card({v ∈ IOT ∪ LV[I] : <u,v>∈EXT[I](p)}) = y }                             │
│<x,p>∈EXT[I](S[I](owl:onProperty))       │                                                                                     │
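The restriction conditions above determine CEXT[I](x) from EXT[I](p) by direct set comprehension. An informative sketch for three representative kinds, over a finite universe of OWL individuals (all names hypothetical):

```python
# Informative sketch of the restriction conditions: compute CEXT[I](x) for
# allValuesFrom, someValuesFrom, and minCardinality restrictions, given the
# property extension EXT[I](p) as a set of pairs and the set IOT.

def all_values_from(iot, ext_p, cext_y):
    """{ u in IOT | <u,v> in EXT[I](p) implies v in CEXT[I](y) }."""
    return {u for u in iot
            if all(v in cext_y for (uu, v) in ext_p if uu == u)}

def some_values_from(iot, ext_p, cext_y):
    """{ u in IOT | exists <u,v> in EXT[I](p) with v in CEXT[I](y) }."""
    return {u for u in iot
            if any(v in cext_y for (uu, v) in ext_p if uu == u)}

def min_cardinality(iot, ext_p, n):
    """{ u in IOT | card({ v : <u,v> in EXT[I](p) }) >= n }."""
    return {u for u in iot
            if len({v for (uu, v) in ext_p if uu == u}) >= n}
```

Note that an individual with no p-values at all satisfies every allValuesFrom restriction vacuously, which the sketch reproduces (`all()` over an empty collection is true).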
Comprehension conditions (principles)
The first two comprehension conditions require the existence of the finite sequences that are used in some OWL constructs. The third comprehension condition requires the existence of instances of
owl:AllDifferent. The remaining comprehension conditions require the existence of the appropriate OWL descriptions and data ranges.
│ If there exists │ then there exists l[1],…,l[n] ∈ IL with │
│x[1], …, x[n] ∈ IOC │<l[1],x[1]> ∈ EXT[I](S[I](rdf:first)), <l[1],l[2]> ∈ EXT[I](S[I](rdf:rest)), … │
│ │<l[n],x[n]> ∈ EXT[I](S[I](rdf:first)), <l[n],S[I](rdf:nil)> ∈ EXT[I](S[I](rdf:rest)) │
│x[1], …, x[n] ∈ IOT∪LV[I] │<l[1],x[1]> ∈ EXT[I](S[I](rdf:first)), <l[1],l[2]> ∈ EXT[I](S[I](rdf:rest)), … │
│ │<l[n],x[n]> ∈ EXT[I](S[I](rdf:first)), <l[n],S[I](rdf:nil)> ∈ EXT[I](S[I](rdf:rest)) │
│ If there exists │ then there exists y with │
│l, a sequence of x[1],…,x[n] over IOT │y∈IAD, <y,l>∈EXT[I](S[I](owl:distinctMembers)) │
│with x[i]≠x[j] for 1≤i<j≤n │ │
│ If there exists │ then there exists y with │
│l, a sequence of x[1],…,x[n] over IOC │y∈IOC, <y,l> ∈ EXT[I](S[I](owl:unionOf)) │
│l, a sequence of x[1],…,x[n] over IOC │y∈IOC, <y,l> ∈ EXT[I](S[I](owl:intersectionOf)) │
│l, a sequence of x[1],…,x[n] over IOT │y∈IOC, <y,l> ∈ EXT[I](S[I](owl:oneOf)) │
│l, a sequence of x[1],…,x[n] over LV[I] │y∈IDC, <y,l> ∈ EXT[I](S[I](owl:oneOf)) │
│ If there exists │ then there exists y ∈ IOC with │
│x ∈ IOC │<y,x> ∈ EXT[I](S[I](owl:complementOf)) │
│ If there exists │ then there exists y ∈ IOR with │
│x ∈ IOOP∪IODP ∧ w ∈ IOC ∪ IDC │<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:allValuesFrom)) │
│x ∈ IOOP∪IODP ∧ w ∈ IOC ∪ IDC │<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:someValuesFrom)) │
│x ∈ IOOP∪IODP ∧ w ∈ IOT ∪ LV[I] │<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:hasValue)) │
│x ∈ IOOP∪IODP ∧ w ∈ LV[I] ∧ w is a non-negative integer│<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:minCardinality)) │
│x ∈ IOOP∪IODP ∧ w ∈ LV[I] ∧ w is a non-negative integer│<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:maxCardinality)) │
│x ∈ IOOP∪IODP ∧ w ∈ LV[I] ∧ w is a non-negative integer│<y,x> ∈ EXT[I](S[I](owl:onProperty)) ∧ │
│ │<y,w> ∈ EXT[I](S[I](owl:cardinality)) │
5.3. OWL Full
OWL Full augments the common conditions with conditions that force the parts of the OWL universe to be the same as their analogues in RDF. These new conditions strongly interact with the common
conditions. For example, because in OWL Full IOT is the entire RDF domain of discourse, the second comprehension condition for lists generates lists of any kind, including lists of lists.
Definition: An OWL Full interpretation of a vocabulary V is an OWL interpretation that satisfies the following conditions (recall that an OWL interpretation is with respect to a datatype map).
│IOT = R[I] │
│IOOP = P[I]│
│IOC = C[I] │
Definition: Let K be a collection of RDF graphs. K is imports closed iff, for every triple of the form x owl:imports u . in any element of K, K contains a graph that is the result of the RDF
processing of the RDF/XML document, if any, accessible at u into an RDF graph. The imports closure of a collection of RDF graphs is the smallest imports-closed collection of RDF graphs containing the collection.
Definitions: Let K and Q be collections of RDF graphs and D be a datatype map. Then K OWL Full entails Q with respect to D iff every OWL Full interpretation with respect to D (of any vocabulary V
that includes the RDF and RDFS vocabularies and the OWL vocabulary) that satisfies all the RDF graphs in K also satisfies all the RDF graphs in Q. K is OWL Full consistent iff there is some OWL Full
interpretation that satisfies all the RDF graphs in K.
5.4. OWL DL
OWL DL augments the conditions of Section 5.2 with a separation of the domain of discourse into several disjoint parts. This separation has two consequences. First, the OWL portion of the domain of
discourse becomes standard first-order, in that predicates (classes and properties) and individuals are disjoint. Second, the OWL portion of an OWL DL interpretation can be viewed as a Description
Logic interpretation for a particular expressive Description Logic.
Definition: An OWL DL interpretation of a vocabulary V is an OWL interpretation that satisfies the following conditions (recall that an OWL interpretation is with respect to a datatype map).
│LV[I], IOT, IOC, IDC, IOOP, IODP, IOAP, IOXP, IL, and IX are all pairwise disjoint. │
│For v in the disallowed vocabulary (Section 4.2), S[I](v) ∈ R[I] - (LV[I]∪IOT∪IOC∪IDC∪IOOP∪IODP∪IOAP∪IOXP∪IL∪IX).│
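The pairwise-disjointness requirement is mechanically checkable for finite interpretations. An informative sketch over the parts given as finite sets:

```python
# Informative sketch: verify that the parts of an OWL DL interpretation
# (LV[I], IOT, IOC, IDC, IOOP, IODP, IOAP, IOXP, IL, IX) are pairwise
# disjoint, each part given as a finite set.

from itertools import combinations

def pairwise_disjoint(parts):
    """True iff no two of the given sets share an element."""
    return all(not (a & b) for a, b in combinations(parts, 2))
```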
Entailment in OWL DL is defined similarly to entailment in OWL Full.
Definitions: Let K and Q be collections of RDF graphs and D be a datatype map. Then K OWL DL entails Q with respect to D iff every OWL DL interpretation with respect to D (of any vocabulary V that
includes the RDF and RDFS vocabularies and the OWL vocabulary) that satisfies all the RDF graphs in K also satisfies all the RDF graphs in Q. K is OWL DL consistent iff there is some OWL DL
interpretation that satisfies all the RDF graphs in K.
There is a strong correspondence between the Direct Model-Theoretic Semantics and the OWL DL semantics (but in case of any conflict, the Direct Model-Theoretic Semantics takes precedence—see the Note
at the beginning of Section 5). Basically, an ontology that could be written in the abstract syntax OWL DL entails another exactly when it entails the other in the direct semantics. There are a
number of complications to this basic story having to do with splitting up the vocabulary so that, for example, concepts, properties, and individuals do not interfere, and arranging that imports
works the same.
For the correspondence to be valid there has to be some connection between an ontology in the abstract syntax with a particular name and the document available on the Web at that URI. This connection
is outside the semantics here, and so must be specially arranged. This connection is also only an idealization of the Web, as it ignores temporal and transport aspects of the Web.
Definition: Let T be the mapping from the abstract syntax to RDF graphs from Section 4.1. Let O be a collection of OWL DL ontologies and axioms and facts in abstract syntax form. O is said to be
imports closed iff for any URI, u, in an imports directive in any ontology in O the RDF parsing of the document accessible on the Web at u results in T(K), where K is the ontology in O with name u.
Theorem 1: Let O and O' be collections of OWL DL ontologies and axioms and facts in abstract syntax form that are imports closed, such that their union has a separated vocabulary (Section 4.2). Given
a datatype map D that maps xsd:string and xsd:integer to the appropriate XML Schema datatypes and that includes the RDF mapping for rdf:XMLLiteral, then O entails O' with respect to D if and only if
the translation (Section 4.1) of O OWL DL entails the translation of O' with respect to D.
The proof is contained in Appendix A.1.
A simple corollary of this is that, given a datatype map D that maps xsd:string and xsd:integer to the appropriate XML Schema datatypes and that includes the RDF mapping for rdf:XMLLiteral, O is
consistent with respect to D if and only if the translation of O is consistent with respect to D.
There is also a correspondence between OWL DL entailment and OWL Full entailment.
Theorem 2: Let O and O' be collections of OWL DL ontologies and axioms and facts in abstract syntax form that are imports closed, such that their union has a separated vocabulary (Section 4.2). Given
a datatype map D that maps xsd:string and xsd:integer to the appropriate XML Schema datatypes and that includes the RDF mapping for rdf:XMLLiteral, then the translation of O OWL Full entails the
translation of O' with respect to D if the translation of O OWL DL entails the translation of O' with respect to D. A sketch of the proof is contained in Appendix A.2.
Appendix A. Proofs (Informative)
This appendix contains proofs of theorems contained in Section 5 of the document.
Nomenclature: Throughout this appendix D will be a datatype map (Section 3.1) containing datatypes for all the built-in OWL datatypes and rdf:XMLLiteral; T will be the mapping from abstract OWL
ontologies to RDF graphs from Section 4.1; and VB will be the built-in OWL vocabulary. Also, the plain literals in a vocabulary will not be mentioned.
Note: Throughout this appendix all interpretations are with respect to the datatype map D. Thus all results are with respect to D. Some of the obvious details of the constructions are missing.
A.1 Correspondence between Abstract OWL and OWL DL
This section shows that the two semantics, the direct model theory for abstract OWL ontologies from Section 3, here called the direct model theory, and the OWL DL semantics from Section 5, here
called the OWL DL model theory, correspond on certain OWL ontologies.
Definition: Given D as above, a separated OWL vocabulary (Section 4.2) is here further formalized into a set of URI references V', disjoint from the disallowed vocabulary (Section 4.2), with a
disjoint partition of V', written as V' = VO + VC + VD + VI + VOP + VDP + VAP + VXP, where the built-in OWL classes are in VC, the URI references for all the datatype names of D and rdfs:Literal are in
VD, the OWL built-in annotation properties are in VAP, and the OWL built-in ontology properties are in VXP.
Definition: The translation of a separated OWL vocabulary, V' = VO + VC + VD + VI + VOP + VDP + VAP + VXP, written T(V'), consists of all the triples of the form
v rdf:type owl:Ontology . for v ∈ VO,
v rdf:type owl:Class . for v ∈ VC,
v rdf:type rdfs:Datatype . for v ∈ VD,
v rdf:type owl:Thing . for v ∈ VI,
v rdf:type owl:ObjectProperty . for v ∈ VOP,
v rdf:type owl:DatatypeProperty . for v ∈ VDP,
v rdf:type owl:AnnotationProperty . for v ∈ VAP, and
v rdf:type owl:OntologyProperty . for v ∈ VXP.
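The translation of a separated vocabulary is purely mechanical: one typing triple per vocabulary item, keyed by its partition. As an informal illustration only (the function name and data layout below are hypothetical and not part of the normative mapping), it can be sketched as:

```python
# Illustrative sketch of T(V'): emit one typing triple per vocabulary item.
# The partition names mirror the definition above; triples are plain tuples.
TYPE_OF = {
    "VO": "owl:Ontology",
    "VC": "owl:Class",
    "VD": "rdfs:Datatype",
    "VI": "owl:Thing",
    "VOP": "owl:ObjectProperty",
    "VDP": "owl:DatatypeProperty",
    "VAP": "owl:AnnotationProperty",
    "VXP": "owl:OntologyProperty",
}

def translate_vocabulary(partitions):
    """Return T(V') as a set of (subject, predicate, object) tuples."""
    triples = set()
    for part, uris in partitions.items():
        for v in uris:
            triples.add((v, "rdf:type", TYPE_OF[part]))
    return triples

# Example: a tiny separated vocabulary with one class, one property, one individual.
V = {"VO": set(), "VC": {"ex:Person"}, "VD": set(), "VI": {"ex:Fred"},
     "VOP": {"ex:author"}, "VDP": set(), "VAP": set(), "VXP": set()}
print(sorted(translate_vocabulary(V)))
```

Because the partitions are disjoint, each URI reference receives exactly one typing triple.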
Definition: A collection of OWL DL ontologies and axioms and facts in abstract syntax form, O, (Section 2) with a separated vocabulary (Section 4) is here further formalized with the new notion of a
separated vocabulary V = VO + VC + VD + VI + VOP + VDP + VAP + VXP, as follows:
• All URI references used as ontology names are taken from VO, class IDs are taken from VC, datatype IDs are taken from VD, individual IDs are taken from VI, individual-valued property IDs are
taken from VOP, data-valued property IDs are taken from VDP, annotation property IDs are taken from VAP, and ontology property IDs are taken from VXP.
• All URI references used as individual IDs are given a type in some ontology in O.
• The ontologies and axioms and facts in O only use the class-only vocabulary as class IDs; only use the datatype-only vocabulary as datatype IDs; only use the property-only vocabulary as
datavaluedProperty IDs, individualvaluedProperty IDs, annotationProperty IDs, or ontologyProperty IDs; and do not mention any disallowed vocabulary.
The theorem to be proved is then: Let O and O' be collections of OWL DL ontologies and axioms and facts in abstract syntax form that are imports closed, such that their union has a separated
vocabulary. Then O direct entails O' if and only if T(O) OWL DL entails T(O').
A.1.1 Correspondence for Descriptions
Lemma 1: Let V' = VO + VC + VD + VI + VOP + VDP + VAP + VXP be a separated OWL vocabulary. Let V = VO ∪ VC ∪ VD ∪ VI ∪ VOP ∪ VDP ∪ VAP ∪ VXP ∪ VB. Let I'= <R,EC,ER,L,S,LV> be a direct interpretation
of V'. Let I = <R[I],P[I],EXT[I],S[I],L[I],LV[I]> be an OWL DL interpretation of V that satisfies T(V'), with LV[I] = LV. Let CEXT[I] have its usual meaning, and, as usual, overload I to map any
syntactic construct into its denotation. If
• R = R[I]
• EC(v)=CEXT[I](S[I](v)), for v∈VC∪VD
• ER(v)=EXT[I](S[I](v)), for v∈VOP∪VDP∪VAP∪VXP
• < x, S(owl:DeprecatedClass) > ∈ ER(rdf:type) iff < x, S[I](owl:DeprecatedClass) > ∈ EXT[I](S[I](rdf:type)), for x ∈ R
• < x, S(owl:DeprecatedProperty) > ∈ ER(rdf:type) iff < x, S[I](owl:DeprecatedProperty) > ∈ EXT[I](S[I](rdf:type)), for x ∈ R
• L(d)=L[I](d), for d any typed data literal
• S(v)=S[I](v) for v ∈ VI ∪ VC ∪ VD ∪ VOP ∪ VDP ∪ VAP ∪ VXP ∪ VO
then for d any abstract OWL description or data range over V',
1. I direct satisfies T(d), and
2. for any A mapping all the blank nodes of T(d) into R[I] where I+A OWL DL satisfies T(d)
1. CEXT[I](I+A(M(T(d)))) = EC(d),
2. if d is a description then I+A(M(T(d)))∈CEXT[I](S[I](owl:Class)), and
3. if d is a data range then I+A(M(T(d)))∈CEXT[I](S[I](rdfs:Datatype)).
The proof of the lemma is by a structural induction. Throughout the proof, let IOT = CEXT[I](I(owl:Thing)), IOC = CEXT[I](I(owl:Class)), IDC = CEXT[I](I(rdfs:Datatype)), IOOP = CEXT[I](I(
owl:ObjectProperty)), IODP = CEXT[I](I(owl:DatatypeProperty)), and IL = CEXT[I](I(rdf:List)).
To make the induction work, it is necessary to show that for any d a description or data range with sub-constructs T(d) contains triples for each of the sub-constructs that do not share any blank
nodes with triples from the other sub-constructs. This can easily be verified from the rules for T.
If p∈VOP then I satisfies p∈IOOP. Then, as I is an OWL DL interpretation, I satisfies <p,I(owl:Thing)>∈EXT[I](I(rdfs:domain)) and <p,I(owl:Thing)>∈EXT[I](I(rdfs:range)). Thus I satisfies T(p).
Similarly for p∈VDP.
Base Case: v ∈ VC, including owl:Thing and owl:Nothing
As v∈VC and I satisfies T(V), thus I(v)∈CEXT[I](I(owl:Class)). Because I is an OWL DL interpretation CEXT[I](I(v))⊆IOT, so <I(v),I(owl:Thing)>∈EXT[I](I(rdfs:subClassOf)). Thus I OWL DL satisfies
T(v). As M(T(v)) is v, thus CEXT[I](I(M(T(v))))=EC(v). Finally, from above, I(v)∈IOC.
Base Case: v ∈ VD, including rdfs:Literal
As v∈VD and I satisfies T(V), thus I(v)∈CEXT[I](I(rdfs:Datatype)). Because I is an OWL DL interpretation CEXT[I](I(v))⊆LV[I], so <I(v),I(rdfs:Literal)>∈EXT[I](I(rdfs:subClassOf)). Thus I
OWL DL satisfies T(v). As M(T(v)) is v, thus CEXT[I](I(M(T(v))))=EC(v). Finally, from above, I(v)∈IDC.
Base Case: d=oneOf(i[1]…i[n]), where the i[j] are individual IDs
As i[j]∈VI for 1≤j≤n and I satisfies T(V), thus I(i[j])∈IOT. The second comprehension principle for sequences then requires that there is some l∈IL that is a sequence of I(i[1]),…,I(i[n]) over
IOT. For any l that is a sequence of I(i[1]),…,I(i[n]) over IOT, the comprehension principle for oneOf requires that there is some y∈CEXT[I](I(rdfs:Class)) such that <y,l> ∈ EXT[I](I(owl:oneOf)).
From the third characterization of oneOf, y∈IOC. Therefore I satisfies T(d). For any I+A that satisfies T(d), CEXT[I](I+A(M(T(d)))) = {I(i[1]),…,I(i[n])} = EC(d). Finally, I+A(M(T(d)))∈IOC.
Base Case: d=oneOf(v[1]…v[n]), where the v[i] are data literals
Because I(v[j])∈LV[I], the second comprehension principle for sequences then requires that there is some l∈IL that is a sequence of I(v[1]),…,I(v[n]) over LV[I]. For any l that is a sequence of I
(v[1]),…,I(v[n]) over LV[I], the comprehension principle for oneOf requires that there is some y∈CEXT[I](I(rdfs:Class)) such that <y,l> ∈ EXT[I](I(owl:oneOf)). From the second characterization of
oneOf, y∈IDC. Therefore I satisfies T(d). For any I+A that satisfies T(d), CEXT[I](I+A(M(T(d)))) = {I(v[1]),…,I(v[n])} = EC(d). Finally, I+A(M(T(d)))∈IDC.
Base Case: d=restriction(p value(i)), with p∈VOP∪VDP and i an individualID
As p∈VOP∪VDP, from above I satisfies T(p). As I satisfies T(V'), I(p)∈IOOP∪IODP. As i∈VI and I satisfies T(V'), I(i)∈IOT. From a comprehension principle for restriction, I satisfies T(d). For any
A such that I+A satisfies T(d), CEXT[I](I+A(M(T(d)))) = { x∈IOT | <x,I(i)> ∈ EXT[I](p) } = { x∈R | <x,S(i)> ∈ ER(p) } = EC(d). Finally, I+A(M(T(d)))∈IOC.
Base Case: d=restriction(p value(i)), with p∈VOP∪VDP and i a typed data value.
Base Case: d=restriction(p minCardinality(n)), with p∈VOP∪VDP and n a non-negative integer
Base Case: d=restriction(p maxCardinality(n)), with p∈VOP∪VDP and n a non-negative integer
Base Case: d=restriction(p cardinality(n)), with p∈VOP∪VDP and n a non-negative integer
Inductive Case: d=complementOf(d')
From the induction hypothesis, I satisfies T(d'). As d' is a description, from the induction hypothesis there is a mapping, A, that maps all the blank nodes in T(d') into domain elements such
that I+A satisfies T(d') and I+A(M(T(d'))) = EC(d') and I+A(M(T(d')))∈IOC. The comprehension principle for complementOf then requires that there is a y∈IOC such that I+A satisfies <y,I+A(M(T(d')))>∈EXT[I](I(owl:complementOf)), so I satisfies T(d). For any I+A that satisfies T(d), CEXT[I](I+A(M(T(d)))) = IOT-CEXT[I](I+A(M(T(d')))) = R-EC(d') = EC(d). Finally, I+A(M(T(d)))∈IOC.
Inductive Case: d = unionOf(d[1] … d[n])
From the induction hypothesis, I satisfies d[i] for 1≤i≤n so there is a mapping, A[i], that maps all the blank nodes in T(d[i]) into domain elements such that I+A[i] satisfies T(d[i]). As the
blank nodes in T(d[i]) are disjoint from the blank nodes of T(d[j]) for i≠j, I+A[1]+…+A[n], and thus I, satisfies T(d[1])∪…∪T(d[n]). Each d[i] is a description, so from the induction hypothesis,
I+A[1]+…+A[n](M(T(d[i])))∈IOC. The first comprehension principle for sequences then requires that there is some l∈IL that is a sequence of I+A[1]+…+A[n](M(T(d[1]))),…, I+A[1]+…+A[n](M(T(d[n])))
over IOC. The comprehension principle for unionOf then requires that there is some y∈IOC such that <y,l>∈EXT[I](I(owl:unionOf)) so I satisfies T(d).
For any I+A that satisfies T(d), I+A satisfies T(d[i]) so CEXT[I](I+A(d[i])) = EC(d[i]). Then CEXT[I](I+A(M(T(d)))) = CEXT[I](I+A(d[1]))∪…∪CEXT[I](I+A(d[n])) = EC(d[1])∪…∪EC(d[n]) = EC(d).
Finally, I+A(M(T(d)))∈IOC.
Inductive Case: d = intersectionOf(d[1] … d[n])
Inductive Case: d = restriction(p x[1] x[2] … x[n])
As p∈VOP∪VDP, from above I satisfies T(p). From the induction hypothesis, I satisfies restriction(p x[i]) for 1≤i≤n so there is a mapping, A[i], that maps all the blank nodes in T(restriction(p x
[i])) into domain elements such that I+A[i] satisfies T(restriction(p x[i])). As the blank nodes in T(restriction(p x[i])) are disjoint from the blank nodes of T(restriction(p x[j])) for i≠j, I+A
[1]+…+A[n], and thus I, satisfies T(restriction(p x[1] … x[n])). Each restriction(p x[i]) is a description, so from the induction hypothesis, M(T(restriction(p x[i])))∈IOC. The first
comprehension principle for sequences then requires that there is some l∈IL that is a sequence of I+A[1]+…+A[n](M(T(restriction(p x[1])))),…, I+A[1]+…+A[n](M(T(restriction(p x[n])))) over IOC.
The comprehension principle for intersectionOf then requires that there is some y∈IOC such that <y,l>∈EXT[I](I(owl:intersectionOf)) so I satisfies T(d).
For any I+A that satisfies T(d), I+A satisfies T(restriction(p x[i])) so CEXT[I](I+A(restriction(p x[i]))) = EC(restriction(p x[i])). Then CEXT[I](I+A(M(T(d)))) = CEXT[I](I+A(restriction(p x[1])))∩…∩CEXT[I](I+A(restriction(p x[n]))) = EC(restriction(p x[1]))∩…∩EC(restriction(p x[n])) = EC(d). Finally, I+A(M(T(d)))∈IOC.
Inductive Case: d = restriction(p allValuesFrom(d')), with p∈VOP∪VDP and d' a description
As p∈VOP∪VDP, from above I satisfies T(p). From the induction hypothesis, I satisfies T(d'). As d' is a description, from the induction hypothesis, any mapping, A, that maps all the blank nodes
in T(d') into domain elements such that I+A satisfies T(d') has I+A(M(T(d'))) = EC(d') and I+A(M(T(d')))∈IOC. As p∈VOP∪VDP and I satisfies T(V'), I(p)∈IOOP∪IODP. The comprehension principle for
allValuesFrom restrictions then requires that I satisfies the triples in T(d) that are not in T(d') or T(p) in a way that shows that I satisfies T(d).
For any I+A that satisfies T(d), CEXT[I](I+A(M(T(d)))) = {x∈IOT | ∀ y∈IOT : <x,y>∈EXT[I](p) implies y∈CEXT[I](I+A(M(T(d'))))} = {x∈R | ∀ y∈R : <x,y>∈ER(p) implies y∈EC(d')} = EC(d). Finally, I+A(M(T(d)))∈IOC.
Inductive Case: d = restriction(p someValuesFrom(d')) with p∈VOP∪VDP and d' a description
Inductive Case: d = restriction(p allValuesFrom(d')) with p∈VOP∪VDP and d' a data range
Inductive Case: d = restriction(p someValuesFrom(d')) with p∈VOP∪VDP and d' a data range
Lemma 1.1: Let V', V, I', and I be as in Lemma 1. Let d be an abstract OWL individual construct over V', (of the form Individual(…)). Then for any A mapping all the blank nodes of T(d) into R[I]
where I+A OWL DL satisfies T(d), I+A(M(T(d))) ∈ EC(d). Also, for any r ∈ EC(d) there is some A mapping all the blank nodes of T(d) into R[I] such that I+A(M(T(d))) = r.
A simple inductive argument shows that I+A(M(T(d))) must satisfy all the requirements of EC(d). Another inductive argument, depending on the non-sharing of blank nodes in sub-constructs, shows that
for each r ∈ EC(d) there is some A such that I+A(M(T(d))) = r.
A.1.2 Correspondence for Directives
Lemma 1.9: Let V', V, I', and I be as in Lemma 1. Let F be an OWL directive over V' with an annotation of the form annotation(p x). If F is a class or property axiom, let n be the name of the class
or property. If F is an individual axiom, let n be the main node of T(F). Then for any A mapping all the blank nodes of T(F) into R[I], I+A OWL DL satisfies the triples resulting from the annotation
iff I' direct satisfies the conditions resulting from the annotation.
For annotations to URI references, the lemma can be easily established by an inspection of the semantic condition and the translation triples. For annotations to Individual(…), the use of Lemma 1.1
is also needed.
Lemma 2: Let V', V, I', and I be as in Lemma 1. Let F be an OWL directive over V'. Then I satisfies T(F) iff I' satisfies F.
The main part of the proof is a structural induction over directives. Annotations occur in many directives and work exactly the same so they just require a use of Lemma 1.9. The rest of the proof
will thus ignore annotations. Deprecations can be handled in a similar fashion and will also be ignored in the rest of the proof.
Case: F = Class(foo complete d[1] … d[n]).
Let d=intersectionOf(d[1] … d[n]). As d is a description over V', thus I satisfies T(d) and for any A mapping the blank nodes of T(d) such that I+A satisfies T(d), CEXT[I](I+A(M(T(d)))) = EC(d).
Thus for any sub-description of d, d', CEXT[I](I+A(M(T(d')))) = EC(d'), and I+A(M(T(d')))∈IOC. Thus for some A mapping the blank nodes of T(d) such that I+A satisfies T(d), CEXT[I](I+A(M(T(d))))
= EC(d) and I+A(M(T(d)))∈IOC; and for each d' a sub-description of d, CEXT[I](I+A(M(T(d')))) = EC(d'), and I+A(M(T(d')))∈IOC.
If I' satisfies F then EC(foo) = EC(d). From above, there is some A such that CEXT[I](I+A(M(T(d)))) = EC(d) = EC(foo) = CEXT[I](I(foo)) and I+A(M(T(d)))∈IOC. Because I satisfies T(V), I(foo)∈IOC,
thus <I(foo),I+A(M(T(d)))> ∈ EXT[I](I(owl:equivalentClass)). Further, because of the semantic conditions on I(owl:intersectionOf), <I(foo),I+A(M(T(SEQ d[1] … d[n])))> ∈ EXT[I](I(owl:intersectionOf)).
If d is of the form intersectionOf(d[1]) then CEXT[I](I+A(M(T(d[1])))) = EC(d[1]) = EC(d) = EC(foo) and I+A(M(T(d[1])))∈IOC. So, from the semantic conditions on I(owl:equivalentClass), <I
(foo),I+A(M(T(d[1])))> ∈ EXT[I](I(owl:equivalentClass)). If d[1] is of the form complementOf(d'[1]) then IOT - CEXT[I](I+A(M(T(d'[1])))) = CEXT[I](I+A(M(T(d[1])))) = EC(d[1]) = EC(d) = EC(foo)
and I+A(M(T(d'[1])))∈IOC. So, from the semantic conditions on I(owl:complementOf), <I(foo),I+A(M(T(d'[1])))> ∈ EXT[I](I(owl:complementOf)). If d[1] is of the form unionOf(d[11] … d[1m]) then CEXT
[I](I+A(M(T(d[11])))) ∪ … ∪ CEXT[I](I+A(M(T(d[1m])))) = CEXT[I](I+A(M(T(d[1])))) = EC(d[1]) = EC(d) = EC(foo) and I+A(M(T(d[1j])))∈IOC, for 1≤ j ≤ m. So, from the semantic conditions on I(
owl:unionOf), <I(foo),I+A(M(T(SEQ d[11] … d[1m])))> ∈ EXT[I](I(owl:unionOf)).
Therefore I satisfies T(F), for each potential T(F).
If I satisfies T(F) then I satisfies T(intersectionOf(d[1] … d[n])). Thus there is some A as above such that <I(foo),I+A(M(T(d)))> ∈ EXT[I](I(owl:equivalentClass)). Thus EC(d) = CEXT[I](I+A(M(T
(d)))) = CEXT[I](I(foo)) = EC(foo). Therefore I' satisfies F.
Case: F = Class(foo partial d[1] … d[n])
Let d=intersectionOf(d[1] … d[n]). As d is a description over V', thus I satisfies T(d) and for any A mapping the blank nodes of T(d) such that I+A satisfies T(d), CEXT[I](I+A(M(T(d)))) = EC(d). Thus CEXT[I](I+A(M(T(d[i])))) = EC(d[i]), for 1 ≤ i ≤ n. Thus for some A mapping the blank nodes of T(d) such that I+A satisfies T(d), CEXT[I](I+A(M(T(d[i])))) = EC(d[i]), and I+A(M(T(d[i])))∈IOC, for 1 ≤ i ≤ n.
If I' satisfies F then EC(foo) ⊆ EC(d[i]), for 1 ≤ i ≤ n. From above, there is some A such that CEXT[I](I+A(M(T(d[i])))) = EC(d[i]) ⊇ EC(foo) = CEXT[I](I(foo)) and I+A(M(T(d[i])))∈IOC. Because I satisfies T(V), I(foo)∈IOC, thus <I(foo),I+A(M(T(d[i])))> ∈ EXT[I](I(rdfs:subClassOf)), for 1 ≤ i ≤ n. Therefore I satisfies T(F).
If I satisfies T(F) then I satisfies T(d[i]), for 1 ≤ i ≤ n. Thus there is some A as above such that <I(foo),I+A(M(T(d[i])))> ∈ EXT[I](I(rdfs:subClassOf)), for 1 ≤ i ≤ n. Thus EC(d[i]) = CEXT[I](I+A(M(T(d[i])))) ⊇ CEXT[I](I(foo)) = EC(foo), for 1 ≤ i ≤ n. Therefore I' satisfies F.
Case: F = EnumeratedClass(foo i[1] … i[n])
Let d=oneOf(i[1] … i[n]). As d is a description over V', I satisfies T(d) and for some A mapping the blank nodes of T(d) such that I+A satisfies T(d), EC(d) = CEXT[I](I+A(M(T(d)))) = {S[I](M(T(i[1]))), …, S[I](M(T(i[n])))}. Also, S[I](M(T(i[j]))) ∈ IOT, for 1 ≤ j ≤ n.
If I' satisfies F then EC(foo) = EC(d). From above, there is some A such that CEXT[I](I+A(M(T(d)))) = EC(d) = EC(foo) = CEXT[I](I(foo)) and I+A(M(T(d)))∈IOC. Let e be I+A(M(T(SEQ i[1] … i[n]))). Then, from the semantic conditions on I(owl:oneOf), <I(foo),e> ∈ EXT[I](I(owl:oneOf)). Therefore I satisfies T(F).
If I satisfies T(F) then I satisfies T(SEQ i[1] … i[n]). Thus there is some A as above such that <I(foo),I+A(M(T(SEQ i[1] … i[n])))> ∈ EXT[I](I(owl:oneOf)). Thus {S[I](M(T(i[1]))), …, S[I](M(T(i[n])))} = CEXT[I](I(foo)) = EC(foo). Therefore I' satisfies F.
Case: F = Datatype(foo)
The only thing that needs to be shown here is the typing for foo, which is similar to that for classes.
Case: F= DisjointClasses(d[1] … d[n])
As d[i] is a description over V' therefore I satisfies T(d[i]) and for any A mapping the blank nodes of T(d[i]) such that I+A satisfies T(d[i]), CEXT[I](I+A(M(T(d[i])))) = EC(d[i]).
If I satisfies T(F) then for 1≤i≤n there is some A[i] such that I satisfies <I+A[i](M(T(d[i]))),I+A[j](M(T(d[j])))> ∈ EXT[I](I(owl:disjointWith)) for each 1≤i<j≤n. Thus EC(d[i])∩EC(d[j]) = {},
for i≠j. Therefore I' satisfies F.
If I' satisfies F then EC(d[i])∩EC(d[j]) = {} for i≠j. For any A[i] and A[j] as above <I+A[i]+A[j](M(T(d[i]))),I+A[i]+A[j](M(T(d[j])))> ∈ EXT[I](I(owl:disjointWith)), for i≠j. As at least one A
[i] exists for each i, and the blank nodes of the T(d[j]) are all disjoint, I+A[1]+…+A[n] satisfies T(DisjointClasses(d[1] … d[n])). Therefore I satisfies T(F).
Case: F = EquivalentClasses(d[1] … d[n])
Case: F = SubClassOf(d[1] d[2])
Somewhat similar.
Case: F = ObjectProperty(p super(s[1]) … super(s[n]) domain(d[1]) … domain(d[m]) range(r[1]) … range(r[k]) [inverse(i)] [Symmetric] [Functional] [InverseFunctional] [Transitive])
As d[i] for 1≤i≤m is a description over V' therefore I satisfies T(d[i]) and for any A mapping the blank nodes of T(d[i]) such that I+A satisfies T(d[i]), CEXT[I](I+A(M(T(d[i])))) = EC(d[i]).
Similarly for r[i] for 1≤i≤k.
If I' satisfies F, then, as p∈VOP, I satisfies I(p)∈IOOP. Then, as I is an OWL DL interpretation, I satisfies <I(p),I(owl:Thing)>∈EXT[I](I(rdfs:domain)) and <I(p),I(owl:Thing)>∈EXT[I](I(
rdfs:range)). Also, ER(p)⊆ER(s[i]) for 1≤i≤n, so EXT[I](I(p))=ER(p) ⊆ ER(s[i])=EXT[I](I(s[i])) and I satisfies <I(p),I(s[i])>∈EXT[I](I(rdfs:subPropertyOf)). Next, ER(p)⊆EC(d[i])×R for 1≤i≤m, so
<z,w>∈ER(p) implies z∈EC(d[i]) and for any A such that I+A satisfies T(d[i]), <z,w>∈EXT[I](p) implies z∈CEXT[I](I+A(M(T(d[i])))) and thus <I(p),I+A(M(T(d[i])))>∈EXT[I](I(rdfs:domain)). Similarly
for r[i] for 1≤i≤k.
If I' satisfies F and inverse(i) is in F, then ER(p) and ER(i) are converses. Thus <u,v>∈ER(p) iff <v,u>∈ER(i) so <u,v>∈EXT[I](p) iff <v,u>∈EXT[I](i) and I satisfies <I(p),I(i)>∈EXT[I](I(
owl:inverseOf)). If I' satisfies F and Symmetric is in F, then ER(p) is symmetric. Thus if <x,y>∈ER(p) then <y,x>∈ER(p), so if <x,y>∈EXT[I](p) then <y,x>∈EXT[I](p), and thus I satisfies p∈CEXT[I](I(owl:SymmetricProperty)). Similarly for Functional, InverseFunctional, and Transitive. Thus if I' satisfies F then I satisfies T(F).
If I satisfies T(F) then, for 1≤i≤n, <I(p),I(s[i])>∈EXT[I](I(rdfs:subPropertyOf)) so ER(p)=EXT[I](I(p)) ⊆ EXT[I](I(s[i]))=ER(s[i]). Also, for 1≤i≤m, for some A such that I+A satisfies T(d[i]), <I
(p),I+A(M(T(d[i])))>∈EXT[I](I(rdfs:domain)) so <z,w>∈EXT[I](p) implies z∈CEXT[I](I+A(M(T(d[i])))). Thus <z,w>∈ER(p) implies z∈EC(d[i]) and ER(p)⊆EC(d[i])×R. Similarly for r[i] for 1≤i≤k.
If I satisfies T(F) and inverse(i) is in F, then I satisfies <I(p),I(i)>∈EXT[I](I(owl:inverseOf)). Thus <u,v>∈EXT[I](p) iff <v,u>∈EXT[I](i) so <u,v>∈ER(p) iff <v,u>∈ER(i) and ER(p) and ER(i) are
converses. If I satisfies T(F) and Symmetric is in F, then I satisfies p∈CEXT[I](I(owl:SymmetricProperty)) so if <x,y>∈EXT[I](p) then <y,x>∈EXT[I](p). Thus if <x,y>∈ER(p) then <y,x>∈ER(p) and ER(p) is
symmetric. Similarly for Functional, InverseFunctional, and Transitive. Thus if I satisfies T(F) then I' satisfies F.
Case: F = DatatypeProperty(p super(s[1]) … super(s[n]) domain(d[1]) … domain(d[m]) range(r[1]) … range(r[k]) [Functional])
Similar, but simpler.
Case: F = AnnotationProperty(p domain(d[1]) … domain(d[m]))
Similar, but even simpler.
Case: F = OntologyProperty(p domain(d[1]) … domain(d[m]))
Similar, but even simpler.
Case: F = EquivalentProperties(p[1] … p[n]), for p[i]∈VOP
As p[i]∈VOP and I satisfies T(V'), I(p[i])∈IOOP. If I satisfies T(F) then <I(p[i]),I(p[j])> ∈ EXT[I](I(owl:equivalentProperty)), for each 1≤i<j≤n. Therefore EXT[I](p[i]) = EXT[I](p[j]), for each
1≤i<j≤n; ER(p[i]) = ER(p[j]), for each 1≤i<j≤n; and I' satisfies F.
If I' satisfies F then ER(p[i]) = ER(p[j]), for each 1≤i<j≤n. Therefore EXT[I](p[i]) = EXT[I](p[j]), for each 1≤i<j≤n. From the OWL DL definition of owl:equivalentProperty, <I(p[i]),I(p[j])> ∈
EXT[I](I(owl:equivalentProperty)), for each 1≤i<j≤n. Thus I satisfies T(F).
Case: F = SubPropertyOf(p[1] p[2])
Somewhat similar, but simpler.
Case: F = SameIndividual(i[1] … i[n])
Similar to EquivalentProperties.
Case: F = DifferentIndividuals(i[1] … i[n])
Similar to EquivalentProperties.
Case: F = Individual([i] type(t[1]) … type(t[n]) value(p[1] v[1]) … value(p[m] v[m]))
If I satisfies T(F) then there is some A that maps each blank node in T(F) such that I+A satisfies T(F). A simple examination of T(F) shows that the mappings of A plus the mappings for the
individual IDs in F, which are all in IOT, show that I' satisfies F.
If I' satisfies F then for each Individual construct in F there must be some element of R that makes the type relationships and relationships true in F. The triples in T(F) then fall into three categories. 1/ Type relationships to owl:Thing, which are true in I because the elements above belong to R. 2/ Type relationships to OWL descriptions, which are true in I because they are true in I', from Lemma 1. 3/ OWL property relationships, which are true in I because they are true in I'. Thus I satisfies T(F).
A.1.3 From RDF Semantics to Direct Semantics
Lemma 3: Let V' = VO + VC + VD + VI + VOP + VDP + VAP + VXP be a separated OWL vocabulary. Let V = VO ∪ VC ∪ VD ∪ VI ∪ VOP ∪ VDP ∪ VAP ∪ VXP ∪ VB. Then for every OWL DL interpretation I = <R[I],P
[I],EXT[I],S[I],L[I],LV[I]> of V that satisfies T(V') there is a direct interpretation I' of V' such that for any collection of OWL abstract ontologies and axioms and facts O with vocabulary V' such
that O is imports closed, I' direct satisfies O iff I OWL DL satisfies T(O).
Let CEXT[I] be defined as usual from I. The required direct interpretation will be I' = < R[I], EC, ER, L, S, LV[I] > where
1. EC(v) = CEXT[I](S[I](v)), for v∈VC∪VD
2. ER(v) = EXT[I](S[I](v)), for v∈VOP∪VDP∪VAP∪VXP
3. < x, S(owl:DeprecatedClass) > ∈ ER(rdf:type) iff < x, S[I](owl:DeprecatedClass) > ∈ EXT[I](S[I](rdf:type)), for x ∈ R
4. < x, S(owl:DeprecatedProperty) > ∈ ER(rdf:type) iff < x, S[I](owl:DeprecatedProperty) > ∈ EXT[I](S[I](rdf:type)), for x ∈ R
5. L(d) = L[I](d), for d a typed literal
6. S(v) = S[I](v), for v∈VI ∪ VC ∪ VD ∪ VOP ∪ VDP ∪ VAP ∪ VXP ∪ VO
V', V, I', and I meet the requirements of Lemma 2, so for any directive F over V', I satisfies T(F) iff I' satisfies F.
Because O is imports closed, O includes all the ontologies that would be imported in T(O) so the importing part of imports directives will be handled the same. Satisfying an abstract ontology is just
satisfying its directives and satisfying the translation of an abstract ontology is just satisfying all the triples so I OWL DL satisfies T(O) iff I' direct satisfies O.
A.1.4 From Direct Semantics to RDF Semantics
Lemma 4: Let V' = VO + VC + VD + VI + VOP + VDP + VAP + VXP be a separated OWL vocabulary. Let V = VO ∪ VC ∪ VD ∪ VI ∪ VOP ∪ VDP ∪ VAP ∪ VXP ∪ VB. Then for every direct interpretation I' = < U, EC,
ER, L, S, LV > of V' there is an OWL DL interpretation I of V that satisfies T(V') such that for any collection of OWL abstract ontologies and axioms and facts O with vocabulary V' such that O is
imports closed, I' direct satisfies O iff I OWL DL satisfies T(O).
Construct I = < R[I], P[I], EXT[I], S[I], L, LV[I] > as follows:
• Let IOC be all the OWL descriptions over V', plus the class-only vocabulary.
Let IDC be all the OWL data ranges over V'.
Let IOP = VOP ∪ VDP ∪ VAP ∪ VXP, plus the property-only vocabulary.
Let IX = VO.
Let IL be rdf:nil plus finite sequences over U, over IOC, and over LV.
Let IAD be rdf:nil plus the finite sequences over U.
Let IV be the disallowed vocabulary minus rdf:nil.
• Let R[I] be the disjoint union of U, IOC, IDC, IOP, IX, IL, IAD, and IV.
• Let P[I] be IOP ∪ IV.
• Let LV[I] = LV.
Let S[I](n) = S(n) for n∈VI.
Let S[I](n) = n for n∈V-VI.
• For c∈IOC, CEXT[I](c)=EC(c), if defined, otherwise {}.
For d∈IDC, CEXT[I](d)=EC(d).
For x∈U∪IL∪IAD∪IOP∪IX, CEXT[I](x)={}.
• CEXT[I](S[I](rdf:type)) = {}
CEXT[I](S[I](rdf:Property)) = P[I]
CEXT[I](S[I](rdf:List)) = IL
CEXT[I](S[I](rdf:XMLLiteral)) is the RDF XML literal values
CEXT[I](S[I](rdf:first))= CEXT[I](S[I](rdf:rest)) = CEXT[I](S[I](rdfs:domain)) = CEXT[I](S[I](rdfs:range)) = {}
CEXT[I](S[I](rdfs:Resource)) = R[I]
CEXT[I](S[I](rdfs:Literal)) = LV[I]
CEXT[I](S[I](rdfs:Datatype)) = D
CEXT[I](S[I](rdfs:Class)) = IOC ∪ IV
CEXT[I](S[I](rdfs:subClassOf)) = CEXT[I](S[I](rdfs:subPropertyOf)) = CEXT[I](S[I](rdfs:member)) = {}
CEXT[I](S[I](rdfs:Container)) = CEXT[I](S[I](rdf:Seq)) ∪ CEXT[I](S[I](rdf:Bag)) ∪ CEXT[I](S[I](rdf:Alt))
CEXT[I](S[I](rdfs:ContainerMembershipProperty)) is the RDF container membership properties
• For r∈IOP, EXT[I](r)=ER(r), if defined, otherwise {}.
• EXT[I](S[I](rdf:type)) is determined by CEXT[I].
EXT[I](S[I](rdf:Property)) = EXT[I](S[I](rdf:List)) = EXT[I](S[I](rdf:XMLLiteral)) = {}
<x,y> ∈ EXT[I](rdf:first) iff x∈IL and y is its first element
<x,y> ∈ EXT[I](rdf:rest) iff x∈IL and y is its tail
EXT[I](S[I](rdfs:domain)) = { <x,y> : x∈IOP ∧ y∈IOC ∧ ∀ w, <w,z> ∈ EXT[I](x) implies w ∈ CEXT[I](y) }
EXT[I](S[I](rdfs:range)) = { <x,y> : x∈IOP ∧ y∈IOC ∧ ∀ z, <w,z> ∈ EXT[I](x) implies z ∈ CEXT[I](y) }
EXT[I](S[I](rdfs:Resource)) = EXT[I](S[I](rdfs:Literal)) = EXT[I](S[I](rdfs:Datatype)) = EXT[I](S[I](rdfs:Class)) = {}
EXT[I](S[I](rdfs:subClassOf)) = { <x,y> : x,y∈IOC ∧ CEXT[I](x) ⊆ CEXT[I](y) }
EXT[I](S[I](rdfs:subPropertyOf)) = { <x,y> : x,y∈OP ∧ EXT[I](x) ⊆ EXT[I](y) }
EXT[I](S[I](rdfs:member)) is the union of the extensions of the container membership properties
EXT[I](S[I](rdfs:Container)) = EXT[I](S[I](rdfs:ContainerMembershipProperty)) = {}
• The extensions of owl:allValuesFrom, owl:cardinality, owl:hasValue, owl:maxCardinality, owl:minCardinality, owl:onProperty, and owl:someValuesFrom are as necessary to link the elements of IOC and
IDC up with their parts. Their class extensions are all empty.
• The extensions of owl:complementOf, owl:intersectionOf, owl:oneOf, and owl:unionOf are as necessary to make their semantic conditions work out correctly. Their class extensions are all empty.
This can easily be done here because modifying these extensions does not induce any loops in the process.
• CEXT[I](S[I](owl:AnnotationProperty)) = VAP
CEXT[I](S[I](owl:OntologyProperty)) = VXP
CEXT[I](S[I](owl:Class)) = IOC
CEXT[I](S[I](owl:DatatypeProperty)) = VDP
CEXT[I](S[I](owl:FunctionalProperty)) is those elements of VOP∪VDP whose extension is partial functional
CEXT[I](S[I](owl:InverseFunctionalProperty)) is those elements of VOP whose extension is inverse partial functional
CEXT[I](S[I](owl:ObjectProperty)) = VOP
CEXT[I](S[I](owl:Ontology)) = VO
CEXT[I](S[I](owl:Restriction)) is those elements of IOC that are OWL restrictions
CEXT[I](S[I](owl:SymmetricProperty)) is those elements of VOP whose extension is symmetric
CEXT[I](S[I](owl:TransitiveProperty)) is those elements of VOP whose extension is transitive
EXT[I](S[I](owl:AnnotationProperty)) = EXT[I](S[I](owl:OntologyProperty)) = EXT[I](S[I](owl:Class)) = EXT[I](S[I](owl:DatatypeProperty)) = EXT[I](S[I](owl:FunctionalProperty)) = EXT[I](S[I](
owl:InverseFunctionalProperty)) = EXT[I](S[I](owl:ObjectProperty)) = EXT[I](S[I](owl:Ontology)) = EXT[I](S[I](owl:Restriction)) = EXT[I](S[I](owl:SymmetricProperty)) = EXT[I](S[I](
owl:TransitiveProperty)) = {}
• CEXT[I](S[I](owl:AllDifferent)) consists of those elements of IAD that have an owl:distinctMembers property
EXT[I](S[I](owl:differentFrom)) is the inequality relation on U
EXT[I](S[I](owl:disjointWith)) relates members of IOC that have the disjoint class extensions
EXT[I](S[I](owl:distinctMembers)) relates elements of IAD to their copy in IL, but only for sequences of distinct individuals
EXT[I](S[I](owl:equivalentClass)) relates members of IOC that have the same class extension
EXT[I](S[I](owl:equivalentProperty)) relates members of IOP that have the same extension
EXT[I](S[I](owl:inverseOf)) relates members of IOP whose extensions are inverses of each other
EXT[I](S[I](owl:sameAs)) is the equality relation on U
EXT[I](S[I](owl:AllDifferent)) = CEXT[I](S[I](owl:differentFrom)) = CEXT[I](S[I](owl:disjointWith)) = CEXT[I](S[I](owl:distinctMembers)) = CEXT[I](S[I](owl:equivalentClass)) = CEXT[I](S[I](
owl:equivalentProperty)) = CEXT[I](S[I](owl:inverseOf)) = CEXT[I](S[I](owl:sameAs)) = {}
• CEXT[I](S[I](owl:DeprecatedClass)) = { x ∈ IOC∪IDC : <x,S(owl:DeprecatedClass)>∈ER(rdf:type) }
CEXT[I](S[I](owl:DeprecatedProperty)) = { x ∈ IOP : <x,S(owl:DeprecatedProperty)>∈ER(rdf:type) }
EXT[I](S[I](owl:DeprecatedClass)) = EXT[I](S[I](owl:DeprecatedProperty)) = {}
Then I is an OWL DL interpretation because the conditions for the class extensions in OWL DL match up with the conditions for class-like OWL abstract syntax constructs.
V', V, I', and I meet the requirements of Lemma 2, so for any directive F over V', I satisfies T(F) iff I' satisfies F.
Because O is imports closed, O includes all the ontologies that would be imported in T(O), so the importing part of imports directives will be handled the same. Satisfying an abstract ontology is just satisfying its directives and satisfying the translation of an abstract ontology is just satisfying all the triples so I OWL DL satisfies T(O) iff I' direct satisfies O.
A.1.5 Correspondence Theorem
Theorem 1: Let O and O' be collections of OWL DL ontologies and axioms and facts in abstract syntax form that are imports closed, such that their union has a separated vocabulary, V', and every URI
reference in V' is used in O. Then O entails O' if and only if T(O) OWL DL entails T(O').
Proof: Suppose O entails O'. Let I be an OWL DL interpretation that satisfies T(O). Then I satisfies T(V'), because each URI reference in V' is used in O. From Lemma 3, there is some direct interpretation I' such that for any abstract OWL ontology or axiom or fact X over V', I satisfies T(X) iff I' satisfies X. Thus I' satisfies each ontology in O. Because O entails O', I' satisfies O', so I satisfies T(O'). Thus T(O) OWL DL entails T(O').
Suppose T(O) OWL DL entails T(O'). Let I' be a direct interpretation that satisfies O. Then from Lemma 4, there is some OWL DL interpretation I such that for any abstract OWL ontology X over V', I
satisfies T(X) iff I' satisfies X. Thus I satisfies T(O). Because T(O) OWL DL entails T(O'), I satisfies T(O'), so I' satisfies O'. Thus O entails O'.
A.2 Correspondence between OWL DL and OWL Full
This section contains a proof sketch concerning the relationship between OWL DL and OWL Full. This proof has not been fully worked out. Significant effort may be required to finish the proof and some
details of the relationship may have to change.
Let K be an RDF graph. An OWL interpretation of K is an OWL interpretation (from Section 5.2) that is a D-interpretation of K.
Lemma 5: Let V be a separated vocabulary. Then for every OWL interpretation I there is an OWL DL interpretation I' (as in Section 5.3) such that for K any OWL ontology in the abstract syntax with
separated vocabulary V, I is an OWL interpretation of T(K) iff I' is an OWL DL interpretation of T(K).
Proof sketch: As all OWL DL interpretations are OWL interpretations, the reverse direction is obvious.
Let I = < R[I], EXT[I], S[I], L[I] > be an OWL interpretation that satisfies T(K). Construct I' = < R[I'], EXT[I'], S[I'], L[I'] > as follows. Let R[I'] = CEXT[I](I(owl:Thing)) + CEXT[I](I(owl:DatatypeProperty)) + CEXT[I](I(owl:ObjectProperty)) + CEXT[I](I(owl:Class)) + CEXT[I](I(rdf:List)) + R[I], where + is disjoint union. Define EXT[I'] so as to separate the various roles of the copies. Define S[I'] so as to map vocabulary into the appropriate copy. This works because K has a separated vocabulary, so I can be split according to the roles, and there are no inappropriate relationships in EXT[I]. In essence the first component of R[I'] is OWL individuals, the second component of R[I'] is OWL datatype properties, the third component of R[I'] is OWL individual-valued properties, the fourth component of R[I'] is OWL classes, the fifth component of R[I'] is RDF lists, and the sixth component of R[I'] is everything else.
Theorem 2: Let O and O' be collections of OWL DL ontologies and axioms and facts in abstract syntax form that are imports closed, such that their union has a separated vocabulary (Section 4.2). Then
the translation of O OWL Full entails the translation of O' if the translation of O OWL DL entails the translation of O'.
Proof: From the above lemma and because all OWL Full interpretations are OWL interpretations.
Note: The only-if direction (the converse of the theorem) is not true.
Appendix B. Examples (Informative)
This appendix gives examples of the concepts developed in the rest of the document.
B.1 Examples of Mapping from Abstract Syntax to RDF Graphs
The transformation rules in Section 4 can transform the ontology
Ontology(ex:ontology
  Individual(type(ex:Book)
    value(ex:author Individual(type(ex:Person) value(ex:name "Fred"^^xsd:string)))))
into the following triples:
ex:ontology rdf:type owl:Ontology .
ex:name rdf:type owl:DatatypeProperty .
ex:author rdf:type owl:ObjectProperty .
ex:Book rdf:type owl:Class .
ex:Person rdf:type owl:Class .
_:x rdf:type ex:Book .
_:x ex:author _:x1 .
_:x1 rdf:type ex:Person .
_:x1 ex:name "Fred"^^xsd:string .
Similarly, the ontology
Ontology(ex:ontology2
  Class(ex:Student complete ex:Person
    restriction(ex:enrolledIn allValuesFrom(ex:School) minCardinality(1))))
can be transformed to
ex:ontology2 rdf:type owl:Ontology .
ex:enrolledIn rdf:type owl:ObjectProperty .
ex:Person rdf:type owl:Class .
ex:School rdf:type owl:Class .
ex:Student rdf:type owl:Class .
ex:Student owl:equivalentClass _:x .
_:x owl:intersectionOf _:l1 .
_:l1 rdf:first ex:Person .
_:l1 rdf:rest _:l2 .
_:l2 rdf:first _:lr .
_:l2 rdf:rest rdf:nil .
_:lr owl:intersectionOf _:lr1 .
_:lr1 rdf:first _:r1 .
_:lr1 rdf:rest _:lr2 .
_:lr2 rdf:first _:r2 .
_:lr2 rdf:rest rdf:nil .
_:r1 rdf:type owl:Restriction .
_:r1 owl:onProperty ex:enrolledIn .
_:r1 owl:allValuesFrom ex:School .
_:r2 rdf:type owl:Restriction .
_:r2 owl:onProperty ex:enrolledIn .
_:r2 owl:minCardinality "1"^^xsd:nonNegativeInteger .
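The blank-node list structure in the triples above can be decoded by chasing rdf:first/rdf:rest links. A minimal sketch, with plain Python tuples standing in for an RDF store (the helper names are illustrative, not part of any standard API):

```python
# the list-carrying triples above, as (subject, predicate, object) tuples
triples = [
    ("ex:Student", "owl:equivalentClass", "_:x"),
    ("_:x", "owl:intersectionOf", "_:l1"),
    ("_:l1", "rdf:first", "ex:Person"),
    ("_:l1", "rdf:rest", "_:l2"),
    ("_:l2", "rdf:first", "_:lr"),
    ("_:l2", "rdf:rest", "rdf:nil"),
]

def obj(s, p):
    # first object found for a (subject, predicate) pair, or None
    return next((o for s2, p2, o in triples if s2 == s and p2 == p), None)

def rdf_list(node):
    # walk rdf:first / rdf:rest cells until rdf:nil terminates the list
    items = []
    while node != "rdf:nil":
        items.append(obj(node, "rdf:first"))
        node = obj(node, "rdf:rest")
    return items

print(rdf_list(obj(obj("ex:Student", "owl:equivalentClass"), "owl:intersectionOf")))
# ['ex:Person', '_:lr']
```

The recovered operands are ex:Person and the nested blank node _:lr, which itself carries the intersection of the two restrictions.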
B.2 Examples of Entailments in OWL DL and OWL Full
OWL DL supports the entailments that one would expect, as long as the vocabulary can be shown to belong to the appropriate piece of the domain of discourse. For example,
John friend Susan .
does not OWL DL entail
John rdf:type owl:Thing .
Susan rdf:type owl:Thing .
friend rdf:type owl:ObjectProperty .
The above three triples would have to be added before the following restriction could be concluded
John rdf:type _:x .
_:x owl:onProperty friend .
_:x owl:minCardinality "1"^^xsd:nonNegativeInteger .
However, once this extra information is added, all natural entailments follow, except for those that involve descriptions with loops. For example,
John rdf:type owl:Thing .
friend rdf:type owl:ObjectProperty .
John rdf:type _:x .
_:x owl:onProperty friend .
_:x owl:maxCardinality "0"^^xsd:nonNegativeInteger .
does not entail
John rdf:type _:y .
_:y owl:onProperty friend .
_:y owl:allValuesFrom _:y .
because there are no comprehension principles for such looping descriptions. It is precisely the lack of such comprehension principles that prevent the formation of paradoxes in OWL DL while still
retaining natural entailments.
In OWL DL one can repair missing localizations in any separated-syntax KB by adding a particular set of localizing assertions consisting of all triples of the form
<individual> rdf:type owl:Thing .
<class> rdf:type owl:Class .
<oproperty> rdf:type owl:ObjectProperty .
<dtproperty> rdf:type owl:DatatypeProperty .
Call the result of adding all such assertions to a OWL DL KB the localization of the KB.
OWL Full supports the entailments that one would expect, and there is no need to provide typing information for the vocabulary. For example,
John friend Susan .
does OWL Full entail
John rdf:type _:x .
_:x owl:onProperty friend .
_:x owl:minCardinality "1"^^xsd:nonNegativeInteger .
Appendix C. Changes from Last Call (Informative)
This appendix provides an informative account of the changes from the last-call version of this document. All substantive post-last call changes to the document, as well as some editorial
post-last-call changes, are indicated in the style of this appendix.
C.1 Substantive changes after Last Call
This section provides information on the post Last Call changes to the document that make changes to the specification of OWL.
• [10 April 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Apr/0046.html, added owl:Class, owl:Restriction, owl:ObjectProperty, owl:DatatypeProperty,
owl:AnnotationProperty, owl:OntologyProperty, owl:Ontology, owl:AllDifferent, owl:FunctionalProperty, owl:InverseFunctionalProperty, owl:SymmetricProperty, and owl:TransitiveProperty to C[I] in
Section 5.2. Some of these were inferrable already.
• [10 April 2003] Related to http://lists.w3.org/Archives/Public/www-webont-wg/2003Apr/0046.html, added owl:distinctMembers to R[I] in Section 5.2.
• [15 April 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Apr/0064.html, added owl:OntologyProperty to the disallowed OWL vocabulary in Section
• [5 May 2003] Per a decision of the Web Ontology working group on 1 May 2003 to add owl:Nothing to OWL Lite, recorded in http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0017.html,
changed the introduction of owl:Nothing to so indicate. The index for owl:Nothing was also updated.
• [9 May 2003] To improve internal consistency, added optional rdf:Property types for Annotation Properties in Section 4.1.
• [30 May 2003] Per a decision of the Web Ontology working group on 29 May 2003 to modify the mapping of EquivalentClasses, recorded in http://lists.w3.org/Archives/Public/www-webont-wg/2003May/
0402.html and in response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Apr/0003.html and http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0052.html, changed the
mapping rule for EquivalentClasses(d1 ... dn) to T(di) owl:equivalentTo T(dj) . for all <i,j> in G where G is a set of pairs over {1,...,n} that if interpreted as an undirected graph forms a
connected graph for {1,...,n}.
• [30 May 2003] Per a decision of the Web Ontology working group on 29 May 2003 to add axioms for ontology properties, recorded in http://lists.w3.org/Archives/Public/www-webont-wg/2003May/
0402.html, added axioms for ontology properties to the OWL Lite and OWL DL abstract syntax in Sections 2.3.1.3. and Section 2.3.2.4; added direct semantics conditions for ontology property axioms
in Section 3.3; and added a mapping for ontology property axioms in Section 4.1. Fixed the proofs of Lemma 2 and Lemma 3.
• [30 May 2003] Per a decision of the Web Ontology working group on 29 May 2003 to change the semantics for owl:intersectionOf and related resources from an intensional semantics to an extensional
semantics, recorded in http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0402.html, modified the semantic conditions for owl:intersectionOf, owl:unionOf, owl:complementOf, and owl:oneOf
in Section 5.2. No change needed to be made to the proof of Lemma 1. Fixed the proofs of Lemma 4 and Lemma 2.
• [2 June 2003] In response to an observation by Jeremy Carroll in http://lists.w3.org/Archives/Public/www-webont-wg/2003Jun/0004.html, changed the mapping rule for anonymous individuals with no
types slightly in Section 4.1.
• [4 June 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Jun/0011.html, the treatment of datatypes and rdfs:Literal has been slightly changed in Section 2.3.1.3, Section
2.3.2.3, and Section 4.1.
• [4 June 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0050.html, point owlsas-rdf-equivalent-class, modified the treatment of ontology annotations in
Section 3.4.
• [5 June 2003] In response to a comment by Jeremy Carroll in http://lists.w3.org/Archives/Public/www-webont-wg/2003Jun/0004.html, the direct semantics has been modified to allow for domain
elements that are not OWL individuals. These domain elements are used to provide meaning for annotations on classes, properties, and ontologies. Changes have been made in Section 3.1, Section 3.2
, Section 3.3, and Appendix A.1.
• [6 June 2003] Changed the treatment of datatypes to correspond with the substantive post-last-call fixes and changes to the treatment of datatypes in RDF. Changes have been made in Section 3.1
and Appendix A.1.
• [26 June 2003] Per a decision of the Web Ontology working group on 26 June 2003 to replace owl:sameIndividualAs with owl:sameAs, recorded in http://lists.w3.org/Archives/Public/www-webont-wg/
2003Jun/0364.html, made changes to Section 2.2, Section 3.3, Section 4.1, Section 4.2, Section 5.2, and Appendix A.1.
• [30 June 2003] Fixed a bug in the semantic conditions for owl:hasValue noticed by Jeremy Carroll, changing the conditions for the value from a property to an individual or a data value in Section
• [23 July 2003] In response to a substantive post-last-call change to the RDF semantics, changing the if-and-only-if conditions for rdfs:subClassOf and rdfs:subPropertyOf to only-if conditions,
added if-and-only-if conditions for rdfs:subClassOf, over OWL classes, and rdfs:subPropertyOf, over OWL individual-valued properties and over OWL datatype properties, to Section 5.2.
• [23 July 2003] In response to a substantive change to the RDF syntax mapping to triples, removing the typing triples for collections, [applicable document unknown], made typing of list resources
optional in Section 4.1. Also modified an example in Appendix B.1.
C.2 Editorial changes after Last Call
This section provides information on post Last Call editorial changes to the document, i.e., changes that do not affect the specification of OWL.
• [9 April 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0023.html, point 2, changed ``most information about properties'' to ``most information concerning
properties'' in Section 2.3.
• [14 April 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0029.html, point 1, added ``Because there is no standard way to go from a URI reference to an XML
Schema datatype in an XML Schema, there is no standard way to use user-defined XML Schema datatypes in OWL.'' to the discussion of allowable XML Schema datatypes in Section 2.
• [14 April 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0029.html, point 2.1, added ``(The property rdf:type is added to the annotation properties so as
to provide a meaning for deprecation, see below.)'' after ``ER provides meaning for URI references that are used as OWL properties.'' in Section 3.1.
• [14 April 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0029.html, point 2.3, added ``A datatype theory must contain datatypes for xsd:string and
xsd:integer. It may contain datatypes for the other built-in XML Schema datatypes that are suitable for use in OWL. It may also contain other datatypes, but there is no provision in the OWL
syntax for conveying what these datatypes are.'' just after the definition of a datatype theory in Section 3.1.
• [14 April 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0029.html, point 2.5, added ``annotations'' to the list of things that EC is extended to in
Section 3.2.
• [9 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0050.html, point owlsas-rdf-datatype-denotation, removed the phrase ``as in RDF'' from Section 2.1.
• [9 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0050.html, point owlsas-rdf-equivalent-class, added an explanation of why one might admit
EquivalentClasses with only one description in Section 2.3.2.1.
• [9 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0050.html, added ``, for n>=1 '' in the semantic condition for multi-restrictions in Section 3.2.
• [9 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0050.html, changed to ``include class identifiers and restrictions'' in Section 2.3.2.2 and
``Elements of the OWL vocabulary that construct descriptions'' in Section 5.2.
• [13 May 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0180.html, the links in the table of contents for Appendix A were fixed.
• [14 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0030.html, added a new paragraph to the beginning of Section 4.
• [14 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0057.html, made some changes to the wording on OWL ontologies in the abstract syntax near the
beginning of Section 2.1.
• [14 May 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0057.html, added anchors to the transformations in Section 4.1.
• [14 May 2003] In response to some discussion about ontology names changed the discussion of the purpose of ontology names in Section 2.1.
• [22 May 2003] In response to a message from Jeff Heflin, http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0302.html, added a comment to the effect that tools should determine entailment
between imports closures in Section 5.3 and Section 5.4. (Removed on 27 May 2003.)
• [22 May 2003] Changed ``consistent with the Web'' to ``imports closed'' in Section 5.3 and Appendix A.
• [26 May 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0335.html, changed several `if' to `iff' in definitions in Section 3.4, Section 5.3, and Section 5.4. This
is editorial as complete definitions are often written using `if'.
• [30 May 2003] Fixed a typographical error in the proof of Lemma 4.
• [4 June 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0029.html, points 2.2 and 2.3, the status of rdfs:Literal and rdf:XMLLiteral has been clarified in
Section 2, Section 2.1, and Section 4.2.
• [19 June 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Jun/0257.html, changed the note after the proof of Theorem 2 in Appendix A.2 to state that the converse of the
theorem is not true.
• [19 June 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003May/0055.html, changed some explanatory text concerning the transformation to triples in Section 4.1.
• [19 June 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Jun/0264.html, added introductory material about the other WebOnt documents to Section 1.
• [24 June 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003May/0069.html, added note about correspondence to existing DLs to Section 2.
• [22 July 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Jul/0011.html and http://lists.w3.org/Archives/Public/public-webont-comments/2003Jul/0041.html,
changed several uses of ``object'' to ``individual'' or ``individual-valued'' in Section 2 and Section 5.2 and made other editorial changes to Section 5.2.
• [23 July 2003] To remove any reference to tools, made wording changes in Section 3.1 and Section 2.1, concerning the treatment of datatypes.
• [25 July 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0064.html, added explicit tagging of the informative or normative nature of all sections.
• [25 July 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Jul/0296.html, removed a comment about the relationship between the two model theories from Section 1.
• [6 August 2003] In response to http://lists.w3.org/Archives/Public/www-webont-wg/2003Jul/0015.html, changed owl:IndividualProperty to owl:ObjectProperty in Appendix A.2.
• [6 August 2003] In response to http://lists.w3.org/Archives/Public/public-webont-comments/2003Apr/0064.html, added quotes around rdfs:Literal to indicate that it is a terminal, not a
non-terminal, in Section 2.3.1.3 and Section 2.3.2.3.
C.3 Substantive changes after Candidate Recommendation
This section provides information on the post Candidate Recommendation changes to the document that make changes to the specification of OWL.
C.4 Editorial changes after Candidate Recommendation
This section provides information on post Candidate Recommendation editorial changes to the document, i.e., changes that do not affect the specification of OWL.
The following table provides pointers to information about each element of the OWL vocabulary, as well as some elements of the RDF and RDFS vocabularies. The first column points to the vocabulary
element's major definition in the abstract syntax of Section 2. The second column points to the vocabulary element's major definition in the OWL Lite abstract syntax. The third column points to the
vocabulary element's major definition in the direct semantics of Section 3. The fourth column points to the major piece of the translation from the abstract syntax to triples for the vocabulary
element in Section 4. The fifth column points to the vocabulary element's major definition in the RDFS-compatible semantics of Section 5.
The Joint US/EU ad hoc Agent Markup Language Committee developed DAML+OIL, which is the direct precursor to OWL. Many of the ideas in DAML+OIL and thus in OWL are also present in the Ontology
Inference Layer (OIL).
This document is the result of extensive discussions within the Web Ontology Working Group as a whole. The participants in this working group included: Yasser al Safadi, Jean-Francois Baget, James
Barnette, Sean Bechhofer, Jonathan Borden, Frederik Brysse, Stephen Buswell, Jeremy Carroll, Dan Connolly, Peter Crowther, Jonathan Dale, Jos De Roo, David De Roure, Mike Dean, Larry Eshelman, Jerome
Euzenat, Dieter Fensel, Tim Finin, Nicholas Gibbins, Pat Hayes, Jeff Heflin, Ziv Hellman, James Hendler, Bernard Horan, Masahiro Hori, Ian Horrocks, Francesco Iannuzzelli, Mario Jeckle, Ruediger
Klein, Natasha Kravtsova, Ora Lassila, Alexander Maedche, Massimo Marchiori, Deborah McGuinness, Libby Miller, Enrico Motta, Leo Obrst, Laurent Olivry , Peter Patel-Schneider, Martin Pike, Marwan
Sabbouh, Guus Schreiber, Shimizu Noboru, Michael Sintek, Michael Smith, Ned Smith, John Stanton, Lynn Andrea Stein, Herman ter Horst, Lynne R. Thompson, David Trastour, Frank van Harmelen, Raphael
Volz, Evan Wallace, Christopher Welty, Charles White, and John Yanosy. | {"url":"http://www.w3.org/TR/2003/PR-owl-semantics-20031215/semantics-all.html","timestamp":"2014-04-17T13:21:43Z","content_type":null,"content_length":"307935","record_id":"<urn:uuid:d334196b-2bb9-4f93-8b11-1547e1b15e4b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Parallel Genetic Algorithms for Constrained Ordering Problems
Kay Wiese, Sivakumar Nagarajan, and Scott D. Goodwin
This paper proposes two different parallel genetic algorithms (PGAs) for constrained ordering problems. Constrained ordering problems are constraint optimization problems (COPs) for which it is
possible to represent a candidate solution as a permutation of objects. A decoder is used to decode this permutation into an instantiation of the COP variables. Two examples of such constrained
ordering problems are the traveling salesman problem (TSP) and the job shop scheduling problem (JSSP). The first PGA we propose (PGA1) implements a GA using p subpopulations, where p is the number of
processors. This is known as the island model. What is new is that we use a different selection strategy, called keep-best reproduction (KBR) that favours the parent with higher fitness over the
child with lower fitness. Keep-best reproduction has shown better results in the sequential case than the standard selection technique (STDS) of replacing both parents by their two children (Wiese
and Goodwin 1997; 1998a; 1998b). The second PGA (PGA2) is different from PGA1: while it also works with independent subpopulations, each subpopulation uses a different crossover operator. It is not a
priori known which operator performs the best. PGA2 also uses KBR and its subpopulations exchange a percentage q of their fittest individuals every x generations. In addition, whenever this exchange
takes place, the subpopulation with the best average fitness broadcasts a percentage q_2 of its fittest individuals to all other subpopulations. This will ensure that for a particular problem
instance the operator that works best will have an increasing number of offspring sampled in the global population. This design also takes care of the fact that in the early stages of a GA run
different operators can work better than in the later stages. Over time PGA2 will automatically adjust to this new situation.
General C++ Programming - January 2013
write a program using one dimensional array that determines the highest value among the five values ...
Feb 1, 2013 at 4:59am UTC
[no replies]
converting an array of integers to string
Hello Friends, I used to work as research assistant to my professor. I was doing a problem on facil...
Feb 1, 2013 at 12:06am UTC
[no replies]
Help extracting nodes from binary search tree
I was wondering if you could assist me with my binary search tree. I have this code so far and I wan...
Jan 31, 2013 at 8:02pm UTC
[no replies]
function and array!?
Well i not really sure how to use function so here is my question. I got and array and i declared ...
Jan 31, 2013 at 2:01pm UTC
[3 replies] Last: thx! i got it now (by arms4)
Physics Forums - View Single Post - Term structure of interest rates
I came across this question in chapter 4 of Hull's 'Options, Futures and Other Derivatives'. I have the answer, but I'm not sure what the explanation is. Could anyone help?
The term structure of interest rates is upward sloping. Put the following in order of magnitude :
a) the 5 year zero rate
b) the yield on a 5 year coupon bearing bond
c) The forward rate corresponding to the period between 4.75 and 5 years in the future
The answer is c > a > b, but why? | {"url":"http://www.physicsforums.com/showpost.php?p=3825072&postcount=1","timestamp":"2014-04-19T09:37:06Z","content_type":null,"content_length":"8808","record_id":"<urn:uuid:afba59a4-0c8b-4cd2-84d0-99a410aea08e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
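A worked numeric sketch can make the ordering concrete. Everything below is an illustrative assumption (a hypothetical upward-sloping, continuously compounded zero curve and a 5% annual coupon), not data from Hull:

```python
import math

def zero_rate(t):
    # hypothetical upward-sloping zero curve (continuously compounded)
    return 0.03 + 0.004 * t

def discount(t):
    return math.exp(-zero_rate(t) * t)

# (c) forward rate for the period between 4.75 and 5 years
t1, t2 = 4.75, 5.0
fwd = (zero_rate(t2) * t2 - zero_rate(t1) * t1) / (t2 - t1)

# (a) the 5-year zero rate
z5 = zero_rate(5.0)

# (b) yield on a 5-year bond paying a 5% annual coupon: price it off the
# zero curve, then bisect for the flat (continuously compounded) yield
coupon, face = 5.0, 100.0
price = sum(coupon * discount(t) for t in range(1, 6)) + face * discount(5.0)

def price_at_yield(y):
    return sum(coupon * math.exp(-y * t) for t in range(1, 6)) + face * math.exp(-y * 5)

lo, hi = 0.0, 0.2
for _ in range(100):
    mid = (lo + hi) / 2
    if price_at_yield(mid) > price:   # discounting too gently: true yield is higher
        lo = mid
    else:
        hi = mid
y = (lo + hi) / 2

print(fwd > z5 > y)   # True: c > a > b
```

Intuitively: with an upward-sloping curve, the forward rate for a late sub-period must sit above the zero rate that averages over the whole five years (so c > a), while the coupon bond's yield is a blend that gives weight to the earlier, lower rates (so a > b).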
Clar number of catacondensed benzenoid hydrocarbons
A fullerene graph is a cubic 3-connected plane graph with (exactly 12) pentagonal faces and hexagonal faces. Let Fn be a fullerene graph with n vertices. A set H of mutually disjoint hexagons of Fn
is a sextet pattern if Fn has a perfect matching which alternates on and off each hexagon in H. The maximum cardinality of sextet patterns of Fn is the Clar number of Fn. It was shown that the Clar
number is no more than ⌊(n−12)/6⌋. Many fullerenes with experimental evidence attain the upper bound, for instance, C60 and C70. In this paper, we characterize extremal fullerene graphs whose Clar
numbers equal (n−12)/6. By the characterization, we show that there are precisely 18 fullerene graphs with 60 vertices, including C60, achieving the maximum Clar number 8 and we construct all these
extremal fullerene graphs.
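The quoted upper bound is easy to evaluate directly; a quick sketch, whose values for C60 and C70 match the abstract:

```python
def clar_upper_bound(n):
    # upper bound floor((n - 12) / 6) on the Clar number of an n-vertex fullerene
    return (n - 12) // 6

print(clar_upper_bound(60))  # 8, attained by C60
print(clar_upper_bound(70))  # 9
```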
- DISCRETE MATHEMATICS, 2006
The resonance graph R(B) of a benzenoid graph B has the perfect matchings of B as vertices, two perfect matchings being adjacent if their symmetric difference forms the edge set of a hexagon of B.
A family P of pair-wise disjoint hexagons of a benzenoid graph B is resonant in B if B−P contains at least one perfect matching, or if B−P is empty. It is proven that there exists a surjective map f
from the set of hypercubes of R(B) onto the resonant sets of B such that a k-dimensional hypercube is mapped into a resonant set of cardinality k.
"... Extremal fullerene graphs with the maximum Clar number ∗ ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=12133015","timestamp":"2014-04-16T23:46:52Z","content_type":null,"content_length":"16133","record_id":"<urn:uuid:e91cb133-b3a1-48f6-9af8-edc506f19348>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
July 27th 2010, 01:26 PM #1
Hey everybody, I have this inequality that I was wondering if I did right, so if anybody can verify for me please?
a) $\frac{3x-9}{x^2-1}\leq -1$
I got $(-\infty, -1)\cup(1,3)$; again, can anybody verify this?
Never mind I caught a mistake I forgot to get all the numbers to one side and make the inequality equal 0
Last edited by Altami; July 27th 2010 at 01:48 PM.
July 27th 2010, 01:53 PM #2
That is not even close.
Solve the following.
$\dfrac{3x-9}{x^2-1}+1\le 0$
July 27th 2010, 02:05 PM #3
Okay, I corrected the error and now I got $[-5, -1)\cup(1, 2]$
is that right?
July 27th 2010, 02:11 PM #4
Yes that is correct.
July 27th 2010, 02:19 PM #5
Hey thanks.
July 27th 2010, 03:13 PM #6
Okay, I really don't want to start another thread, so I'll just keep this one but add new questions.
I am having trouble with this one.
A right circular cylinder is inscribed within a right circular cone with height 10 inches and radius of base 3 inches. Find:
A) The total surface area of the cylinder in terms of the radius of the cylinder.
Now I know that the equation for the surface area of a cylinder is $A=2\pi r^2+2\pi r h$, but I don't know how to express it in terms of the radius alone.
July 27th 2010, 03:20 PM #7
Please start a new thread for any new question.
That is a rule of this forum.
July 27th 2010, 03:23 PM #8
Oh, okay.
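Two quick sketches related to the thread above. Deriving the inequality directly gives the solution set $(x+5)(x-2)/((x-1)(x+1)) \le 0$, i.e. $[-5, -1)\cup(1, 2]$ (worked out here, not quoted from the thread), which the first part spot-checks numerically. The second part uses the similar-triangles relation $h = 10(1 - r/3)$ for a cylinder inscribed in the cone; the sample points and the test radius are arbitrary choices:

```python
import math

# (1) spot-check the inequality (3x - 9)/(x^2 - 1) <= -1
def f(x):
    return (3*x - 9) / (x**2 - 1)

inside = [-5, -2, 1.5, 2]     # points in the claimed solution set [-5,-1) U (1,2]
outside = [-6, 0, 3]          # points outside it
print(all(f(x) <= -1 for x in inside))   # True
print(all(f(x) > -1 for x in outside))   # True

# (2) inscribed cylinder: similar triangles give height h = 10 * (1 - r/3),
# so the total surface area becomes a function of r alone
def cylinder_surface_area(r):
    h = 10 * (1 - r / 3)
    return 2 * math.pi * r**2 + 2 * math.pi * r * h

print(round(cylinder_surface_area(1.5), 2))  # 61.26
```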
Conditional Statements
Logical Relationships Between Conditional Statements: The Converse, Inverse, and Contrapositive
Every Great American City Has At Least One College.
Worcester Has Ten. Highway billboard in Worcester, MA
This billboard advertisement plays on the fact that people, in both daily life and within mathematics classes, tend to treat related, but logically distinct, conditional statements as equivalent.
What is the advertisement trying to suggest? Is it supplying the necessary evidence?
A conditional statement is one that can be put in the form if A, then B where A is called the premise (or antecedent) and B is called the conclusion (or consequent). We can convert the above
statement into this standard form: If an American city is great, then it has at least one college. The advertisers then share with us that Worcester has ten colleges, that is, that it satisfies the
conclusion of the statement. They hope that we will then conclude that Worcester is great. Leaving aside the truth of that conclusion, it is not a logical deduction. Just because a premise implies a
conclusion, that does not mean that the converse statement, if B, then A, must also be true. We do not need to accept the statement, "if an American city has at least one college, then it is great."
Why do we often fall into this trap (known as a converse error)? Because everyday comments often carry an unstated second meaning. If we say "If it is raining, then I carry my umbrella," then people
have some reason to assume the converse statement as well as the inverse (if not A, then not B), "if it is not raining, then I do not carry my umbrella." If these were not the case, you might just as
well have said, "I carry my umbrella all of the time!" However, the truth of the inverse and converse of a statement is logically unrelated to the truth of the initial statement. Consider the true
mathematical statement, "if a figure is a square, then it is a rectangle," which has false converse and inverse statements.
The Euler Diagram below represents the statement if A, then B. All of the points within the inner circle match the premise A. Because the circles are nested, those same points are within circle B
(they possess the property associated with B). Notice that there are points in circle B that are not inside of circle A. Therefore, the converse statement, if B, then A, does not have to be true. If
the converse were true, then circle B would need to be contained within A as well and the two circles would have to be identical. The same points that show that the converse might be false, also show
that the inverse is suspect. There might be examples which do not have property A, but which do have property B so if not A, then not B is not dependable.
The Euler Diagram for "if A, then B"
The Euler Diagram for "if a figure is a triangle, then it is a polygon"
A third transformation of a conditional statement is the contrapositive, if not B, then not A. The contrapositive does have the same truth value as its source statement. The Euler diagram illustrates
why the contrapositive is equivalent to the original statement. Because circle A is wholly within circle B, points outside of circle B (not B) must be outside of circle A (not A) as well. So not B
implies not A. Note that the inverse and converse of the same statement are each other's contrapositive and are, therefore, both true or both false conditional statements.
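These logical relationships can also be checked mechanically with a four-row truth table. A small sketch, treating "if A, then B" as material implication:

```python
from itertools import product

def implies(p, q):
    # material implication: false only when p is true and q is false
    return (not p) or q

rows = list(product([False, True], repeat=2))          # all (A, B) truth assignments
statement      = [implies(a, b) for a, b in rows]      # if A, then B
converse       = [implies(b, a) for a, b in rows]      # if B, then A
inverse        = [implies(not a, not b) for a, b in rows]
contrapositive = [implies(not b, not a) for a, b in rows]

print(statement == contrapositive)  # True
print(converse == inverse)          # True: they are each other's contrapositive
print(statement == converse)        # False
```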
The equivalence of a statement and its contrapositive is at the heart of the method of proof by contradiction, which proves that the contrapositive of a conjecture is true and, therefore, that the
original conjecture is true.
The first chapter of Harold Jacobs's (1987) Geometry, 2nd Edition published by W. H. Freeman has a thorough discussion of these ideas with many good exercises.
Math Forum Discussions
Topic: A practical alternative of the method of random permutation of Fisher & Yates
Replies: 1 Last Post: Jun 24, 2013 3:30 AM
Re: A practical alternative of the method of random permutation of Fisher & Yates
Posted: Jun 24, 2013 3:30 AM
Sorry for a tiny but confusing typo. The header of the first table in my OP:
n=20                   n=52                   n=100
H-D  shuffle()  F&Y    H-D  shuffle()  F&Y    H-D  shuffle()  F&Y    Expected
should read:
n=20                   n=52                   n=100
H-D  shuffle()  F&Y    H-D  shuffle()  F&Y    H-D  shuffle()  F&Y    Expected
M. K. Shen
Date Subject Author
6/22/13   A practical alternative of the method of random permutation of Fisher & Yates   Mok-Kong Shen
6/24/13   Re: A practical alternative of the method of random permutation of Fisher & Yates   Mok-Kong Shen
Heretical Ideas that Provided the Cornerstone for the Standard Model of Particle Physics
After being wrong twice, I knew that I needed help, and enlisted the aid of Hagen and Kibble. By the end of April, we had all of the essential results (including the now famous boson) of our "GHK
paper" that was received by Physical Review Letters on October 12, 1964. The only thing missing was the final consistency check given in the last equation in our paper. This paper presents the above
general result, namely that the Nambu-Goldstone theorem cannot be applied to physical particles described by gauge theories. It also introduces spontaneously-broken scalar electrodynamics as a new
specific model of this phenomenon, and examines the leading-order solution. As expected, it has no resemblance to solutions in coupling-constant perturbation theory. The massless photon, after
absorbing one degree of freedom of the charged scalar boson, is replaced by a massive particle of unit spin. This leaves a neutral scalar boson, now known as the "Higgs" boson.
It took us so long to publish our work, because none of the many physicists we approached believed our very strange results. My painful history of errors combined with the fact that we ended up with
the exact opposite result to that which I had originally expected, was adequate justification for caution. Indeed, even after publication, Heisenberg made it clear to me at his conference at
Feldafing (1965) that he thought our work was wrong. The many seminars I gave on our ideas were generally met with (mostly polite) skepticism.
Just as our paper was about to be put into the mail, we were surprised by the arrival of two preprints containing related work by Englert and Brout (EB) and also Higgs (H). While these addressed the
same problem, we felt that they missed many crucial points. Neither paper raised the fundamental point of the relevance of the Nambu-Goldstone theorem to only gauge modes, and certainly not to
approximation-independent results. Both the EB and H papers examine leading-order broken scalar electrodynamics in manifestly covariant form. Such solutions must have massless Nambu-Goldstone
particles, despite that our work showed that these cannot correspond to physical solutions. EB did not do a complete study, and entirely missed the "Higgs Boson", and made only a weak remark about
the Nambu-Goldstone massless particle. Higgs only considered a classical solution, and ignored the massless Nambu-Goldstone boson contained in his equations. This massless boson is absolutely
required by quantum mechanics, and by overlooking this Higgs neglected to address the formidable problem presented by the Nambu-Goldstone theorem.
Stimulated by the recent experimental developments, discussions about these old papers have resumed. It has been observed that, in the GHK paper, the mass of the "Higgs" boson is zero, while it is
not zero in Higgs’ paper. In fact, the mass given in the H paper can take on any value (including zero), as it depends on undetermined parameters. Moreover, the mass in the H and GHK papers has
nothing to do with the physical mass of the "Higgs" boson. This is because only the leading-order approximation is considered, and higher-order iterations produce divergences which, even after
renormalization, leave the physical mass an undetermined parameter that can only be set by experiment. The theory puts absolutely no constraint on this mass! This is why there was no direct guidance
of where experimenters should look for the particle. The GHK and the H papers give different leading-order masses because the first-order approximations are different due to the implementation of
different - but equivalent - renormalization philosophies. As the order of iteration increases, the renormalized Greens functions of the two different approaches converge.
There are two additional papers that complete the five 1964 series of papers discussed above. The paper that I gave at the conference where I had the discussion with Heisenberg mentioned above was
published in the Proceedings of seminar of unified theories of elementary particles, July 1965. This paper contains a significant extension in detail of the GHK paper. It has recently been
republished in Mod. Phys. Lett. A26 (2011) 1381-1392 and posted as an eprint in arXiv:1107.4592. Peter Higgs in 1965 submitted a paper to Physical Review that has much in common with my paper
mentioned above and indeed, he thanks me for conversations. He adds one element by displaying calculations of tree graphs which contribute to the next order of correction to the 1964 PRL model.
It is worth noting that the discussion of the above mechanisms can be generalized to the full standard model, since the non-abelian analysis is not different in any fundamental way from the abelian
case. Along this line another relevant paper by Tom Kibble was published in 1967 in Physical Review specifically applying the arguments of the 1964 GHK paper to non-abelian gauge theories.
Only after Weinberg and Salam proposed a unified electroweak theory did any of these papers receive serious attention. If the particle described on July 4th turns out to be the "Higgs
Boson", as seems very likely, the major missing part of the standard model will be in place, and our work will be verified to be far more than the questionable mathematical exercise that it was
initially thought to be.
For readers interested in more of the sociology behind this work as well as the detailed mathematics and extensive explicit referencing, I suggest reading my historical review:
"The History of the Guralnik, Hagen and Kibble development of the Theory of Spontaneous Symmetry Breaking and Gauge Particles." e-Print: arXiv:0907.3466 [physics.hist-ph], Int.J.Mod.Phys.
24:2601-2627, 2009.
[Released: March 2013]
Intuitive Semantics for First-Degree Entailments and
, 2000
"... Abstract: This is a history of relevant and substructural logics, written for the Handbook of the History and Philosophy of Logic, edited by Dov Gabbay and John Woods. 1 1 ..."
Cited by 139 (16 self)
Abstract: This is a history of relevant and substructural logics, written for the Handbook of the History and Philosophy of Logic, edited by Dov Gabbay and John Woods. 1 1
- Artificial Intelligence , 1990
"... We introduce a new approach to dealing with the well-known logical omniscience problem in epistemic logic. Instead of taking possible worlds where each world is a model of classical
propositional logic, we take possible worlds which are models of a nonstandard propositional logic we call NPL, which ..."
Cited by 50 (4 self)
We introduce a new approach to dealing with the well-known logical omniscience problem in epistemic logic. Instead of taking possible worlds where each world is a model of classical propositional
logic, we take possible worlds which are models of a nonstandard propositional logic we call NPL, which is somewhat related to relevance logic. This approach gives new insights into the logic of
implicit and explicit belief considered by Levesque and Lakemeyer. In particular, we show that in a precise sense agents in the structures considered by Levesque and Lakemeyer are perfect reasoners
in NPL. 1
- To appear, Special Logic issue of the Australasian Journal of Philosophy , 2000
"... Abstract: A widespread assumption in contemporary philosophy of logic is that there is one true logic, that there is one and only one correct answer as to whether a given argument is deductively
valid. In this paper we propose an alternative view, logical pluralism. According to logical pluralism th ..."
Cited by 23 (5 self)
Abstract: A widespread assumption in contemporary philosophy of logic is that there is one true logic, that there is one and only one correct answer as to whether a given argument is deductively
valid. In this paper we propose an alternative view, logical pluralism. According to logical pluralism there is not one true logic; there are many. There is not always a single answer to the question
“is this argument valid?” 1 Logic, Logics and Consequence Anyone acquainted with contemporary Logic knows that there are many so-called logics. 1 But are these logics rightly so-called? Are any of
the menagerie of non-classical logics, such as relevant logics, intuitionistic logic, paraconsistent logics or quantum logics, as deserving of the title ‘logic ’ as classical logic? On the other
hand, is classical logic really as deserving of the title ‘logic ’ as relevant logic (or any of the other non-classical logics)? If so, why so? If not, why not? Logic has a chief subject matter:
Logical Consequence. The chief aim of
, 2000
"... In this paper, I distinguish different kinds of pluralism about logical consequence. In particular, I distinguish the pluralism about logic arising from Carnap's Principle of Tolerance from a
pluralism which maintains that there are different, equally "good" logical consequence relations on the one ..."
Cited by 1 (0 self)
In this paper, I distinguish different kinds of pluralism about logical consequence. In particular, I distinguish the pluralism about logic arising from Carnap's Principle of Tolerance from a
pluralism which maintains that there are different, equally "good" logical consequence relations on the one language. I will argue that this second form of pluralism does more justice to the
contemporary state of logical theory and practice than does Carnap's more moderate pluralism.
- National University , 1995
"... In this paper we consider the implications for belief revision of weakening the logic under which belief sets are taken to be closed. A widely held view is that the usual belief revision
functions are highly classical, especially in being driven by consistency. We show that, on the contrary, the ..."
Add to MetaCart
In this paper we consider the implications for belief revision of weakening the logic under which belief sets are taken to be closed. A widely held view is that the usual belief revision functions
are highly classical, especially in being driven by consistency. We show that, on the contrary, the standard representation theorems still hold for paraconsistent belief revision. Then we give
conditions under which consistency is preserved by revisions, and we show that this modelling allows for the gradual revision of inconsistency. 1 Realistic Logics Belief Revision is a rich and
diverse field. The unit of study is most often a belief set --- a set K of sentences (propositions, whatever) closed under a consequence relation. In Gardenfors' canonical text [6], and in nearly all
other studies of belief revision, the notion of consequence is taken to be superclassical. Consequence at least includes classical propositional consequence. This is a theoretical simplification.
No-one believ...
, 1998
"... ... In this paper I give a consistency proof, by providing a model for the theses of truthmaking in my earlier paper. This result does two things. Firstly, it shows that the theses of
truthmaking are jointly consistent. Secondly, it provides an independently philosophically motivated formal model fo ..."
Add to MetaCart
... In this paper I give a consistency proof, by providing a model for the theses of truthmaking in my earlier paper. This result does two things. Firstly, it shows that the theses of truthmaking are
jointly consistent. Secondly, it provides an independently philosophically motivated formal model for relevant logics in the `possible worlds' tradition of Routley and Meyer [8, 16, 17]. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2021819","timestamp":"2014-04-18T19:41:05Z","content_type":null,"content_length":"24819","record_id":"<urn:uuid:a4d38be9-086e-4976-a766-cf0293f0801f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
BMC Syst Biol. 2010; 4: 129.
Protein complex prediction based on k-connected subgraphs in protein interaction network
Protein complexes play an important role in cellular mechanisms. Recently, several methods have been presented to predict protein complexes in a protein interaction network. In these methods, a
protein complex is predicted as a dense subgraph of protein interactions. However, interactions data are incomplete and a protein complex does not have to be a complete or dense subgraph.
We propose a more appropriate protein complex prediction method, CFA, that is based on connectivity number on subgraphs. We evaluate CFA using several protein interaction networks on reference
protein complexes in two benchmark data sets (MIPS and Aloy), containing 1142 and 61 known complexes respectively. We compare CFA to some existing protein complex prediction methods (CMC, MCL, PCP
and RNSC) in terms of recall and precision. We show that CFA predicts more complexes correctly at a competitive level of precision.
Many real complexes with different connectivity level in protein interaction network can be predicted based on connectivity number. Our CFA program and results are freely available from http://
Several groups have produced a large amount of data on protein interactions [1-9]. It is desirable to use this wealth of data to predict protein complexes. Several methods have been applied to
protein interactome graphs to detect highly connected subgraphs and predict them as protein complexes [10-25]. The main criterion used for protein complex prediction is cliques or dense subgraphs.
Spirin and Mirny proposed the clique-finding and super-paramagnetic clustering with Monte Carlo optimization approach to find clusters of proteins [10]. Another method is Molecular Complex Detection
(MCODE) [11], which starts with vertex weighting and finds dense regions according to given parameters. On the other hand, the Markov CLuster algorithm (MCL) [26,27] simulates a flow on the network
by using properties of the adjacency matrix. MCL partitions the graph by discriminating strong and weak flows in the graph. The next algorithm is RNSC (Restricted Neighborhood Search Clustering) [13
]. It is a cost-based local search algorithm that explores the solution space to minimize a cost function, which is calculated based on the numbers of intra-cluster and inter-cluster edges.
However, many biological data sources contain noise and do not contain complete information due to limitations of experiments. Recently, some computational methods have estimated the reliability of
individual interaction based on the topology of the protein interaction network (PPI network) [23,28,29]. The Protein Complex Prediction method (PCP) [30] uses indirect interactions and topological
weight to augment protein-protein interactions, as well as to remove interactions with weights below a threshold. PCP employs clique finding on the modified PPI network, retaining the benefits of
clique-based approaches. Liu et al. [31] proposed an iterative score method to assess the reliability of protein interactions and to predict new interactions. They then developed the Clustering based
on Maximal Clique algorithm (CMC) that uses maximal cliques to discover complexes from weighted PPI networks.
Following these past works, we model the PPI network with a graph, where vertices represent proteins and edges represent interactions between proteins. We present a new algorithm CFA--short for k
-Connected Finding Algorithm--to find protein complexes from this graph. Our algorithm is based on finding maximal k-connected subgraphs. The union of all maximal k-connected subgraphs (k ≥ 1) forms
the set of candidate protein clusters. These candidate clusters are then filtered to remove (i) clusters having less than four proteins and (ii) clusters having a large diameter. We compare the
results of our algorithm with the results of MCL, RNSC, PCP and CMC. Our algorithm produces results that are comparable or better than these existing algorithms on real complexes of [32,33].
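The filtering step above can be sketched directly. The minimum size of four proteins comes from the text; the specific diameter cutoff used here is an illustrative assumption, since the paper only says "a large diameter":

```python
from collections import deque

def diameter(nodes, adj):
    """Greatest BFS distance between any pair of nodes in a connected cluster."""
    best = 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v in nodes and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

def filter_clusters(clusters, adj, min_size=4, max_diameter=3):
    # min_size=4 is stated in the text; max_diameter=3 is a hypothetical cutoff.
    return [c for c in clusters
            if len(c) >= min_size and diameter(c, adj) <= max_diameter]

# Toy network: a 4-cycle (vertices 1-4) plus a pendant path 4-5-6.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3, 5}, 5: {4, 6}, 6: {5}}
kept = filter_clusters([{1, 2, 3, 4}, {4, 5, 6}, {1, 2, 3, 4, 5, 6}], adj)
# Only the 4-cycle survives: {4,5,6} is too small, the full graph too stretched.
```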
Generally, a complete or a dense subgraph of a protein interaction network is proposed to be a protein complex. But there are many complexes which have different topology and density (see Figure
1). So we need to define a criterion to predict protein complexes with different topologies.
Connectivity of two known complexes. Part (A) contains two known complexes reported by MIPS (MIPS ID: 510.40.10 and 550.1.213). In complex 1, except for one vertex, there are at least two independent
paths between every two proteins. In complex 2, except ...
Interaction Graphs
A PPI network is considered as an undirected graph G = (V, E), where each vertex v ∈ V represents a protein in the network and each edge uv ∈ E represents an observed interaction between proteins u and v. Two vertices u and v of G are adjacent, or neighbors, if and only if uv is an edge of G. The degree d(v) of a vertex v is defined as the number of neighbors that the protein v has.
The density of a graph G = (V, E) is defined as D_G = 2|E| / (|V|(|V| − 1)).
If all the vertices of G are pairwise adjacent, then G is a complete graph and D_G = 1. A complete graph on n vertices is denoted by K_n. The cluster score of G is defined as D_G × |V|.
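These quantities are straightforward to compute; the sketch below uses the standard edge density 2|E| / (|V|(|V| − 1)), which equals 1 for a complete graph, consistent with the text:

```python
def density(n_vertices, edges):
    """Edge density D_G = 2|E| / (|V| (|V| - 1)); 1.0 for a complete graph."""
    if n_vertices < 2:
        return 0.0
    return 2 * len(edges) / (n_vertices * (n_vertices - 1))

def cluster_score(n_vertices, edges):
    # Cluster score = density times the number of vertices, as defined in the text.
    return density(n_vertices, edges) * n_vertices

# The complete graph K_4 has 6 edges, so density 1.0 and cluster score 4.0.
k4_edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```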
A path in a non-empty graph G = (V, E) between two vertices u and v is a sequence of distinct vertices u = v_0, v_1, ..., v_k = v such that v_i v_{i+1} ∈ E for 0 ≤ i < k. G is called connected if every two vertices of G are linked by a path in G. G is called k-connected (for k ∈ ℕ) if |V| > k and the graph G − X obtained by removing any set X ⊆ V with |X| < k, together with its incident edges, is still connected. The distance d(u, v) is the length of a shortest path in G between two vertices u and v. The greatest distance between any two vertices in G is the diameter of G, denoted by diam G. A non-empty 1-connected subgraph with the minimum number of edges is called a tree. It is well known that a connected graph is a tree if and only if the number of edges of the graph is one less than the number of its vertices. It is a classic result of graph theory -- the global version of Menger's theorem [34] -- that a graph is k-connected if and only if any two of its vertices can be joined by k independent paths (two paths are independent if they only intersect in their ends).
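The k-connectivity definition above can be checked by brute force on small graphs, removing every vertex set of size less than k and testing connectivity; a minimal sketch (exponential, so only for illustration):

```python
from itertools import combinations

def is_connected(nodes, adj):
    """Depth-first search restricted to the given vertex set."""
    nodes = set(nodes)
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(v for v in adj.get(u, ()) if v in nodes)
    return seen == nodes

def is_k_connected(nodes, adj, k):
    """Per the definition: |V| > k, and G - X stays connected for every |X| < k."""
    nodes = set(nodes)
    if len(nodes) <= k:
        return False
    return all(is_connected(nodes - set(x), adj)
               for size in range(k)
               for x in combinations(nodes, size))

# A 4-cycle: two independent paths between any pair, so 2- but not 3-connected.
cycle = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
```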
Results and Discussion
Data Sets
Protein-Protein Interaction Network Data
In this work, we use two high-throughput protein-protein interaction (PPI) data collections. The first data collection, GRID, contains six protein interaction networks from the Saccharomyces
cerevisiae (bakers' yeast) genome. These include two-hybrid interactions from Uetz et al. [2] and Ito et al. [3], as well as interactions characterized by mass spectrometry technique from Ho, Gavin,
Krogan and their colleagues [6-9]. We refer to these data sets as PPI[Uetz], PPI[Ito], PPI[Ho], PPI[Gavin2], PPI[Gavin6], and PPI[Krogan].
The other data collection is obtained from BioGRID [35]. This data collection includes interactions obtained by several techniques. We only consider interactions derived from mass spectrometry and
two-hybrid experiments as these represent physical interactions and co-complexed proteins. We refer to this data set as PPI[BioGRID]. Some descriptive statistics of each protein interaction network
are presented in Table 1.
Summary statistics of each data set.
Protein Complex Data
Two reference sets of protein complexes are used in our work. The first data set was gathered by Aloy et al. [32] and the other was released in the Munich Information Center for Protein Sequences
(MIPS) [33] at the time of this work (September 2009). We refer to the two protein complex data sets as APC (Aloy Protein Complex) and MPC (MIPS Protein Complex), respectively. Details of these data
sets are described in Table 2. During validation, those proteins which cannot be found in the input interaction network are removed from the complex data.
Summary statistics of each protein complex data sets for each PPI network.
Cellular Component Annotation
The level of noise in protein interaction data--especially those obtained by two-hybrid experiments--has been estimated to be as high as 50% [36-38]. Liu et al. [31] have shown that using a de-noised
protein interaction network as input leads to better quality of protein complex predictions by existing methods. A protein complex can only be formed if its proteins are localized within the same
component of the cell. So we use localization coherence of proteins to clean up the input protein interaction network. We use cellular component terms from Gene Ontology (GO) [39] to evaluate
localization coherence. We find that among the 5040 yeast proteins, only 4345 or 86% of them are annotated. To avoid arriving at misleading conclusions caused by biases in the annotations, we use the
concept of informative cellular component. We define a cellular component annotation as informative if it has at least k proteins annotated with it and each of its descendent GO terms has less than k
proteins annotated with it. In this work, we set k as 10. This yields 150 informative cellular component GO terms on the BioGRID data set.
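The informative-term selection can be sketched as below, assuming annotations have already been propagated up the ontology; the toy term IDs and data structures are hypothetical:

```python
def informative_terms(annotations, children, k=10):
    """A term is informative if it annotates >= k proteins while every
    descendant term annotates fewer than k proteins."""
    def descendants(t):
        out, stack = set(), list(children.get(t, ()))
        while stack:
            c = stack.pop()
            if c not in out:
                out.add(c)
                stack.extend(children.get(c, ()))
        return out
    return {t for t, prots in annotations.items()
            if len(prots) >= k
            and all(len(annotations.get(d, ())) < k for d in descendants(t))}

# Hypothetical two-term ontology: "root" is too general once "child" qualifies.
children = {"root": ["child"]}
annotations = {"root": {"p1", "p2", "p3", "p4"}, "child": {"p1", "p2", "p3"}}
```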
Performance Evaluation Measures
There are many studies that predict protein complexes. To evaluate the performance of various protein complex prediction methods, we compare the predicted protein complexes with real protein complex
data sets, APC and MPC.
To compare the clusters--i.e., predicted protein complexes--found by different algorithms to real protein complexes, we use a measure based on the fraction of proteins in the predicted cluster that
overlaps with the known complex. Let S be a predicted cluster and C be a reference complex, with size |S| and |C| respectively. The matching score between S and C is defined by

Overlap(S, C) = |S ∩ C|² / (|S| × |C|)

If Overlap(S, C) meets or exceeds a threshold θ, then we say S and C match. Following Liu et al. [31], we use an overlap threshold of 0.5 to determine a match.
Given a set of reference complexes C = {C[1], C[2], ...., C[n]}and a set of predicted complexes S = {S[1],S[2], ..., S[m]}, precision and recall at the whole-complex level are defined as follows:
$\mathrm{Prec} = \frac{|\{S_i \in S \mid \exists\, C_j \in C,\ \mathrm{Overlap}(S_i, C_j) \geq \theta\}|}{|S|}, \qquad \mathrm{Recall} = \frac{|\{C_i \in C \mid \exists\, S_j \in S,\ \mathrm{Overlap}(S_j, C_i) \geq \theta\}|}{|C|}$
The precision and recall are two numbers between 0 and 1. They are the commonly used measures to evaluate the performance of protein complex prediction methods [30,31]. In particular, precision
corresponds to the fraction of predicted clusters that matches real protein complexes; and recall corresponds to the fraction of real protein complexes that are matched by predicted clusters.
Another measure which can be used to evaluate the performance of a method is F-measure. According to [40], this measure was first introduced by Rijsbergen [41]. They defined F-measure as the harmonic
mean of precision and recall:

$F = \frac{2 \times \mathrm{Prec} \times \mathrm{Recall}}{\mathrm{Prec} + \mathrm{Recall}}$
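All three measures can be computed together. The sketch below assumes the standard overlap score |S ∩ C|² / (|S| × |C|) used by Liu et al. [31]; complexes and clusters are plain sets of protein names:

```python
def overlap(s, c):
    """Matching score |S ∩ C|^2 / (|S| * |C|)."""
    inter = len(s & c)
    return inter * inter / (len(s) * len(c))

def prec_recall_f(predicted, reference, theta=0.5):
    # A predicted cluster counts if it matches some reference complex, and
    # a reference complex counts if some predicted cluster matches it.
    matched_pred = sum(1 for s in predicted
                       if any(overlap(s, c) >= theta for c in reference))
    matched_ref = sum(1 for c in reference
                      if any(overlap(s, c) >= theta for s in predicted))
    prec = matched_pred / len(predicted)
    recall = matched_ref / len(reference)
    f = 2 * prec * recall / (prec + recall) if prec + recall else 0.0
    return prec, recall, f

# Toy data: one good prediction, one miss, one unmatched reference complex.
predicted = [{"a", "b", "c", "d"}, {"x", "y", "z", "w"}]
reference = [{"a", "b", "c", "d", "e"}, {"p", "q", "r", "s"}]
```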
To justify using the connectivity definition and cellular component annotation, we analyze the connectivity number and localization coherence of reference complexes of MPC on PPI networks obtained by
[6-9] as well as [35].
Co-Localization Score of Known Complexes
A protein complex is a set of proteins that interact with each other at the same time and place, forming a single multimolecular machine [10]. This biological definition of a protein complex helps us
predict protein complexes. Using the information of cellular component annotation existing in GO, Liu et al. [31] define a localization group as the set of proteins annotated with a common
informative cellular component GO annotation. They then define the co-localization score of the complex, c, as the maximum number of proteins in the complex that are in the same localization group,
max{|c ∩ L_i| : i = 1, ..., k}, divided by the number of those proteins in c with localization annotations, |{p ∈ c : ∃ L_i ∈ L, p ∈ L_i}|, where L = {L_1, ..., L_k} is a set of localization groups. More formally, the co-localization score of a set of complexes C is the weighted average score over all complexes:

$\mathrm{locscore}(C) = \frac{\sum_{c \in C} \max\{|c \cap L_i| \mid i = 1, \ldots, k\}}{\sum_{c \in C} |\{p \in c \mid \exists\, L_i \in L,\ p \in L_i\}|}$
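A sketch of this weighted average (total best-group overlap over total annotated proteins), with hypothetical localization groups and complexes:

```python
def locscore(complexes, loc_groups):
    """Weighted average co-localization score over a set of complexes."""
    num = den = 0
    for c in complexes:
        # Proteins of the complex that carry any localization annotation.
        annotated = {p for p in c if any(p in g for g in loc_groups)}
        if not annotated:
            continue  # unannotated complexes contribute to neither sum
        num += max(len(c & g) for g in loc_groups)
        den += len(annotated)
    return num / den if den else 0.0

# Hypothetical data: protein "f" carries no localization annotation.
groups = [{"a", "b", "c"}, {"d", "e"}]
cplx = [{"a", "b", "d"}, {"d", "e", "f"}]
# locscore = (2 + 2) / (3 + 2) = 0.8
```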
The locscore for MPC and APC are 0.74 and 0.86 respectively. The relatively large values of these numbers suggest that cleaning the input PPI network by cellular component information should help us
improve precision and recall of existing algorithms.
Impact of Localization Information
In this work, the cleaning of PPI networks using informative cellular component GO terms is an important preprocessing step. So we analyze here the impact of using informative GO cellular component
annotation on the performance of four existing algorithms--CMC, MCL, PCP, and RNSC-- on their standard parameters. (The CMC package comes with its own PPI-cleaning method. However, in order to
observe the effect of cleaning based on cellular component GO terms on CMC, this method is not used in this work.)
Let G_i = G[L_i] be the induced subgraph of G generated by the vertex set L_i, where {L_1, L_2, ..., L_k} is the set of localization groups. Thus each L_i contains a set of proteins localized to the same cellular component--i.e., they are annotated by the same informative GO term. Let C_i be the set of all clusters predicted by an algorithm on G_i. Then $C_L = \bigcup_{i=1}^{k} C_i$ denotes the set of all clusters predicted by the algorithm on G.
To evaluate the impact of localization information, we compare the precision and recall of C_L and clusters generated on the original PPI network G. Table 3 summarizes some general features
of clusters predicted by the algorithms mentioned. We observe that, by using protein cellular component annotations, the number of predicted clusters generally increases, while the average cluster
size decreases. We further observe that the average size of clusters predicted by MCL and CMC algorithms are larger than those predicted by others. We also compare the precision and recall of the
clusters predicted by the four algorithms. We find that generally the precision and recall values have significant improvements in C_L.
Features of clusters predicted by different algorithms on both the original and C_L networks.
The precision and recall values obtained at the matching threshold θ = 0.5 are given in Table 3. RNSC performs best on PPI[BioGRID], while MCL performs best on PPI[Gavin6], PPI[Gavin2], and PPI[Ho]. In the original network of PPI[Krogan], PCP shows better precision against recall compared to other methods, while after cleaning by using localization information almost all methods have
similar performance. This table shows that none of these algorithms has the best precision vs recall in all networks.
We present two illustrative examples in Figure 2. The first example (Figure 2(A)) is the unmatched cluster predicted by CMC on the original network of PPI[Gavin2]. This cluster contains a
four-member protein complex with specific GO cellular component annotation (GO.0005956; protein kinase CK2 complex). The other seven proteins in the CMC cluster belong to other localization groups.
This cluster is refined in C_L to match well with the same real complex. In Figure 2(B), PCP predicts a seven-member cluster matched to a complex of MPC using the localization annotation on PPI
[Krogan]. In contrast, only four proteins in this complex are matched to the corresponding PCP cluster predicted on the original network.
Examples of the clusters predicted on the original and the C_L networks. Part (A) illustrates the impact of using informative cellular component GO term annotations on the performance of CMC. CMC
predicts the unmatched cluster on the original network. ...
Density of Known Complexes
We consider the density of known complexes with size at least three for each PPI network. Figure 3 shows that algorithms based on graph density cannot predict a large number of known complexes, and recall values of these algorithms are destined to be limited. For example, there are 11 complexes among 827 known complexes with D_G = 0 and 41 complexes with density value less than
0.1 in PPI[BioGRID]. Similarly, there exist 200 complexes among 551 known complexes with density value less than 0.1 in PPI[Gavin2].
The frequency distribution of known protein complexes having various density of protein interactions within them.
Furthermore, almost all complexes which are complete or have high density are of the form K_3, while there are a large number of cliques of size 3 which are not complexes. For example, in PPI[BioGRID], there exist 176 known complexes of size three, while the number of cliques of size 3 in PPI[BioGRID] is 37230. This means that only about 0.47% of them are known real complexes. So, those clusters and complexes with size at most three are removed in our work, to avoid an excessive number of false positive predictions.
We have also studied the number of known complexes of size four in PPI[BioGRID]. We find that there exist 138 real complexes of size four, while only 54 of them have high density.
The discussions above suggest that the density criterion alone cannot answer the question of finding complexes. We need to introduce another criterion to overcome this problem.
Connectivity of Known Complexes
We show in this section that connectivity is a reasonable alternative criterion for identifying protein complexes. Although this criterion is simple, it may directly describe the general
understanding of the protein complex concept. This criterion is better than density because, while there are a lot of known complexes that are not complete or dense, there are many k-connected
subgraphs with low density. For example, Figure 1(A) shows two real complexes of MPC with low density (0.34). Both of them have a large 2-connected subgraph.
Similar to the definition of locscore, we define kscore of a set of complexes, C, as follows;
$\mathrm{kscore}(C) = \frac{\sum_{c \in C} \max\{|s_i^k(c)| \mid i = 1, \ldots, n\}}{\sum_{c \in C} |\{p \in c \mid \exists\, q \in c,\ pq \in E\}|}$

where $s_1^k(c), s_2^k(c), \ldots, s_n^k(c)$ are the maximal k-connected subgraphs of complex c.
In Table 4, the kscore and average density of different PPI networks on MPC are shown. The average density of the set of real complexes is usually low. On the other hand, on average, 99.5%
of proteins of each real complex are located in 1-connected subgraphs. Also 78.4%, 53.7% and 37.4% of proteins of each real complex are located in 2-connected, 3-connected, and 4-connected subgraphs respectively. By increasing the connectivity number, this average decreases, but there exist some proteins which are located in a subset of a real complex with high k-connectivity.
The kscore and average density of different PPI networks on MPC.
This suggests that using the connectivity number as a criterion for protein complex prediction may be a good approach. Therefore, our algorithm is based on finding maximal k-connected subgraphs in PPI networks, increasing k until it cannot be increased any more. In other words, the algorithm continues until it reaches some integer k[0] such that there is no k-connected subgraph with k > k[0].
Testing for Accuracy
To check the validity of CFA, we compare clusters predicted by CFA with the clusters obtained by CMC, MCL, PCP and RNSC, on the seven protein interaction networks of GRID and BioGRID. The networks
are first segregated by informative cellular component GO terms before these algorithms are run. MPC and APC are used as benchmark real protein complexes.
In PPI[Uetz], none of the algorithms could produce any cluster matched by real complexes in MPC and APC. PPI[Uetz] is a difficult example because, as can be seen in Table 1, it is a much sparser and much more incomplete network compared to the other PPI networks. So in Table 5, we present the number of matched clusters and matched complexes predicted by the clustering methods on the other six PPI networks.
Precision and recall values of different algorithms on each PPI network.
Table 5 shows that CFA performs better on PPI[Krogan], PPI[Ito], PPI[Gavin2] and PPI[Gavin6] compared to other methods. In fact, both precision and recall values of CFA are greater than those of all the other algorithms on these networks. In PPI[Ho], RNSC has the greatest precision. However, RNSC predicts merely 26 clusters and, among these predictions, 13 clusters are matched to 5 real complexes in APC and 19 clusters are matched to 21 real complexes in MPC. Thus the recall value of RNSC is very low (0.166 on APC and 0.038 on MPC). In contrast, CFA correctly predicts 13 real complexes of APC and 62 of MPC. The clusters of CFA give the precision value 0.416 (0.166) and the recall value 0.114 (0.433) on MPC (APC), which are generally better than those obtained by RNSC and other methods on PPI[Ho].
We also study the number of matched clusters and matched complexes of predictions on PPI[Biogrid]. We find that almost all algorithms predict the same number of real complexes in APC. However, CFA matches many more complexes in MPC than CMC (18% more), MCL (5% more), PCP (15% more) and RNSC (17% more). Furthermore, this significant superiority of CFA in recall comes with the highest precision value in MPC. The overall precision of CFA on the combined APC and MPC complexes, as can be computed from Table 6, is 0.492, which is comparable to CMC (0.422), PCP (0.411), and RNSC (0.502), and is superior to MCL (0.274).
Detailed breakdown of predicted clusters by different algorithms with respect to APC and MPC reference protein complexes.
We find that all complexes predicted by CMC and RNSC are identified by at least one of the other three algorithms. To compare real complexes predicted by CFA, MCL and PCP, Figure 4 shows a Venn diagram of complexes predicted by these algorithms on the combined set of APC and MPC complexes. It shows that CFA predicts the largest number of real complexes that MCL and PCP cannot predict. So CFA finds a different group of complexes from the other methods.
The Venn diagram of matched complexes. A Venn diagram of the combined set of complexes in APC and MPC that are correctly predicted by CFA, CMC and RNSC based on the PPI[Biogrid] network.
Some interactions in PPI[Biogrid] are derived from the two-hybrid technique. Due to the level of noise in two-hybrid experiments, we expect those predicted clusters having the form of a tree structure to have lower reliability compared to other 1-connected subgraphs. Hence, in order to improve the results of CFA, we only use 1-connected subgraphs that are not trees. A tree with n vertices has n - 1 edges; so a connected cluster is a tree if and only if its cluster score is 2. Thus, we consider 1-connected subgraphs with cluster scores greater than 2. Similarly, we can do additional filtering for each k-connected subgraph by considering the clusters with cluster score greater than k+1. The precision and recall values of the resulting further refined clusters are 0.465 and 0.178 in MPC and 0.347 and 0.838 in APC. So the precision vs recall of CFA, using cluster score filtering, shows significant improvement compared to other methods in PPI[Biogrid] on APC too.
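Since a connected graph is a tree exactly when it has n - 1 edges, the tree filter itself needs no score computation. A minimal sketch (the `(vertex_set, edge_set)` cluster representation is an assumption of this sketch):

```python
def is_tree(num_vertices, num_edges):
    # A connected graph is a tree iff it has exactly n - 1 edges.
    return num_edges == num_vertices - 1

def filter_trees(clusters):
    # clusters: list of (vertex_set, edge_set) pairs, each assumed
    # connected; keep only the clusters that are not trees.
    return [(v, e) for v, e in clusters if not is_tree(len(v), len(e))]
```

For example, a 3-vertex path (2 edges) is dropped while a 3-cycle (3 edges) is kept.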
On the other hand, we observe that some predicted clusters have large overlap with each other. That is, we have some clusters S[i] and S[j] such that Overlap(S[i], S[j]) ≥ α. To get a more concise understanding of CFA and the other prediction methods, we also clean up the set of predictions by removing redundant clusters. In other words, when two predicted clusters show an overlap score above the threshold value (of α = 0.5), we keep the larger one. The precision and recall values after this additional cleaning of the set of predictions are given in Table 7, which shows that, generally, CFA identifies the largest number of complexes based on nonredundant predicted clusters on each PPI network.
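The redundancy-removal step can be sketched greedily as below. The excerpt does not define the Overlap score, so the common neighborhood-affinity form |A ∩ B|² / (|A|·|B|) is used here purely as an assumption:

```python
def overlap(a, b):
    # Assumed overlap score: |A ∩ B|^2 / (|A| * |B|).
    # The paper's exact Overlap definition is not given in this excerpt.
    inter = len(a & b)
    return inter * inter / (len(a) * len(b))

def remove_redundant(clusters, alpha=0.5):
    # Greedy cleanup: visit clusters from largest to smallest and drop
    # any cluster whose overlap with an already-kept one reaches alpha,
    # so the larger of each overlapping pair survives.
    kept = []
    for c in sorted(clusters, key=len, reverse=True):
        if all(overlap(c, k) < alpha for k in kept):
            kept.append(c)
    return kept
```

For instance, with clusters {1,2,3,4}, {1,2,3} and {5,6}, the middle one overlaps the first with score 9/12 = 0.75 and is removed.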
Precision and recall values after removing highly overlapping clusters.
Examples of Predicted Clusters
In this section, we present five matched and unmatched clusters predicted by CFA.
In Figure 1(A), two MIPS complexes, marked as 1 and 2, are depicted according to the protein interactions of PPI[Gavin2]. Complex 1 is an eleven-member complex (MIPS ID 550.1.213; Probably transcription DNA Maintenance Chromatin Structure) that contains a protein, YNL113W, whose interactions with other proteins are missing from PPI[Gavin2]. Complex 2 contains 12 proteins (MIPS ID 510.40.10; RNA polymerase II) and there exists a protein, YLR418C, in this complex whose interactions with other proteins are missing in PPI[Gavin2]. There are four common proteins in these two complexes. Without considering localization annotations, CFA predicts all vertices of this graph (except for YLR418C and YNL113W) as a 2-connected subgraph. After segregating the network using GO terms, CFA predicts two clusters (Figure 1(B)) which are matched to the real complexes in Figure 1(A).
In Figure 5, we show three matched and unmatched clusters. The first cluster contains 30 proteins from PPI[Gavin6]. The cluster is perfectly matched to a complex in MPC of size 30. The density of this complex is 0.2, so it can be considered a non-dense real complex. The second cluster is a nineteen-member cluster from PPI[Krogan]. This cluster contains a known complex in MPC of 18 proteins with a specific GO annotation (GO:0006511; ubiquitin-dependent protein catabolic process). The one additional protein (YDR363W-A) predicted by CFA to be in this cluster turns out to have the same biological process GO term annotation. We think that with more accurate experimental data, this 19th protein may also be a protein of this complex. The smallest cluster in our samples contains six proteins that are predicted by CFA in PPI[BioGRID]. The cluster members have the same specific GO annotation (GO:0015031; protein transport), though this cluster is not presented as a known complex in MPC and APC.
Examples of matched and unmatched clusters. Examples of matched (cluster 1 and 2) predicted clusters by CFA with different density. And an example of unmatched cluster predicted by CFA which contains
proteins having the same specific GO annotation (GO: ...
To gain further insight into the differences among CFA's clusters and clusters predicted by other algorithms, we consider the first CFA cluster presented in Figure 5. This cluster is matched perfectly to a 30-member complex in MPC. In contrast, CMC's clusters overlap with at most 16 members of this complex. The corresponding cluster predicted by PCP is a twenty-five-member cluster, and the other members of the real complex do not belong to the PCP cluster. Similarly, merely fifteen members of the corresponding RNSC cluster overlap with the same complex. Among these methods only MCL predicts a cluster which matches the same complex perfectly.
The third cluster shown in Figure 5 is an unmatched cluster which is obtained by the CFA, CMC, PCP and RNSC algorithms. None of the proteins of this cluster belongs to any real complex in MPC or APC. However, MCL predicts a cluster containing all members of the above-mentioned cluster plus an extra protein with a different GO term annotation.
In the first part of this work, we study the impact of using informative cellular component GO term annotations on the performance of several different protein complex prediction algorithms. We have shown (Table 3) that existing algorithms predict protein complexes with significantly higher precision and recall when the input PPI network is cleansed using informative cellular component GO term annotations. Therefore, we propose for protein complex prediction algorithms a preprocessing step where the input PPI network is segregated by informative cellular component GO terms.
In the second part of this work, we study the density of protein interactions within protein complexes. We have shown (Figure 3) that there are many real complexes with widely varying density, so density alone is not a good criterion for the prediction of protein complexes. Therefore, we look at the connectivity number of complexes as a possible alternative criterion. We observe (Table 4) that 87%-99% of real protein complexes are 1-connected, 68%-87% are 2-connected, 35%-54% are 3-connected, and 23%-37% are 4-connected.
So in the third part of this work, we propose the CFA algorithm to predict protein complexes based on finding k-connected subgraphs in an input PPI network that has been segregated according to informative cellular component GO term annotations on its proteins. Table 8 shows the precision and recall of maximal k-connected subgraphs on different PPI networks using MPC complexes as reference protein complexes. It can be seen that, by increasing the connectivity number of subgraphs, precision values show significant improvement compared to subgraphs with low connectivity numbers. However, the recall values decrease, due to a decrease in the number of predicted subgraphs. We have found that combining the k-connected subgraphs for various values of k as our set of predicted protein complexes yields the best precision vs recall performance. This combined set constitutes the predicted clusters output by CFA.
Precision and recall values of maximal k-connected (k ≥ 1) subgraphs, C1, C2, ..., C9, and their union U.
Finally, we compare the performance of CFA to several state-of-the-art protein complex prediction methods. We have shown (Table 5) that CFA performs better than the other methods in most test cases. For example, on the largest network in our test sets (PPI[Biogrid]), the number of complexes predicted by RNSC is very low compared to CFA. In particular, CFA predicts 19 complexes which RNSC is unable to predict, while RNSC predicts 2 complexes which CFA is unable to predict. Furthermore, by varying the threshold on the matching score, we show in Figure 6 the F-measure graphs based on protein clusters predicted for various protein interaction networks. We observe that CFA consistently shows the best performance compared to other methods over the entire range.
F-measure graphs of CFA, CMC, MCL, PCP, and RNSC. The F-measure graphs of the five mentioned methods, obtained by varying the threshold on matching scores, for (A) PPI[Biogrid], (B) PPI[Gavin6], (C) PPI[Gavin2] and (D) PPI[Krogan].
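The F-measure plotted in such graphs is conventionally the harmonic mean of precision and recall. A minimal sketch of how one point of such a curve could be computed (the way matched counts are derived from a threshold here is an illustrative assumption, not the paper's exact procedure):

```python
def f_measure(precision, recall):
    # Harmonic mean of precision and recall (F1); 0 when both are 0.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def f_curve(match_scores, n_predicted, n_reference, thresholds):
    # match_scores: the matching score of each predicted-cluster /
    # reference-complex match; a match counts once its score reaches
    # the threshold.
    curve = []
    for t in thresholds:
        matched = sum(1 for s in match_scores if s >= t)
        curve.append(f_measure(matched / n_predicted,
                               matched / n_reference))
    return curve
```

Raising the threshold lowers the matched count, trading recall against precision, which is what the curves in Figure 6 visualize.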
In the Observations section we explained that cellular component annotations can help us to improve predictions. On the other hand, by studying the connectivity number of real complexes as subgraphs of the PPI network, we showed that the connectivity number can be a reasonable criterion for predicting complexes. We therefore present a new algorithm based on finding k-connected subgraphs (k ≥ 1) in PPI networks segregated by informative cellular component GO terms.
A new algorithm named CFA (k-Connected Finding Algorithm) is presented here to predict complexes from an input (cleansed) PPI network. The CFA algorithm comprises two main steps. In the first step,
maximal k-connected subgraphs for various k are generated as candidate complexes. In the second step, a number of filtering rules are applied to eliminate unlikely candidates.
The heart of the first step of CFA contains two simple procedures. The first procedure is REFINE, which removes all vertices of degree less than k from the input graph. This is an obvious
optimization since, by the global version of Menger's theorem [34], such vertices cannot be part of any k-connected subgraphs. The second procedure is COMPONENT, which takes the refined graph and
fragments it into k-connected subgraphs. This procedure finds a set of h < k vertices that disconnects the input graph, producing several connected components of the graph. The procedure is then
recursively called on each of these connected components. The procedure terminates on a connected component (and returns it as a maximal k-connected subgraph) if it cannot be made disconnected by
removing h < k vertices. The correctness of this procedure follows straightforwardly from the global version of Menger's theorem.
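The two procedures can be sketched in Python as follows. The brute-force separator search in `find_small_cut` is an illustrative assumption — the text does not say how the separator is found — and is only practical for small graphs; the recursion here keeps the separator on each side, since its vertices may still belong to k-connected subgraphs:

```python
from itertools import combinations

def refine(adj, k):
    # REFINE: repeatedly remove vertices of degree < k; by the global
    # version of Menger's theorem they lie in no k-connected subgraph.
    adj = {v: set(ns) for v, ns in adj.items()}
    low = [v for v, ns in adj.items() if len(ns) < k]
    while low:
        for v in low:
            for u in adj.pop(v):
                if u in adj:
                    adj[u].discard(v)
        low = [v for v, ns in adj.items() if len(ns) < k]
    return adj

def connected_components(adj):
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def find_small_cut(adj, k):
    # Brute force: try every vertex set of size h < k as a separator.
    verts = list(adj)
    for h in range(1, k):
        for cand in combinations(verts, h):
            cand = set(cand)
            rest = {v: adj[v] - cand for v in adj if v not in cand}
            if rest and len(connected_components(rest)) > 1:
                return cand
    return None

def k_connected_subgraphs(adj, k):
    # COMPONENT: split on any separator of fewer than k vertices and
    # recurse; a piece with no such separator is k-connected.
    adj = refine(adj, k)
    found = set()
    for comp in connected_components(adj):
        if len(comp) <= k:          # too small to be k-connected
            continue
        sub = {v: adj[v] & comp for v in comp}
        cut = find_small_cut(sub, k)
        if cut is None:
            found.add(frozenset(comp))
            continue
        rest = {v: sub[v] - cut for v in comp - cut}
        for part in connected_components(rest):
            side = part | cut       # keep the separator on each side
            side_adj = {v: sub[v] & side for v in side}
            for s in k_connected_subgraphs(side_adj, k):
                found.add(frozenset(s))
    return [set(s) for s in found]
```

As an example, two triangles sharing a single vertex form one 1-connected subgraph, but splitting on the shared vertex yields the two triangles as maximal 2-connected subgraphs.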
In the second step of CFA, we call the procedures defined in the first step with larger and larger values of k until no more k-connected subgraphs are returned. This way, we obtain maximal k-connected subgraphs for various values of k. These subgraphs are then filtered using the following three simple rules: (1) 1-connected subgraphs having diameter greater than 4 are removed. (2) k-connected subgraphs (k ≥ 2) having diameter greater than k are removed. (3) Subgraphs of size less than 4 are removed. The pseudocode of the CFA algorithm is given in Table 9.
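The three filtering rules can be sketched directly, with the diameter computed by breadth-first search from every vertex (the adjacency-dict subgraph representation is an assumption of this sketch):

```python
from collections import deque

def diameter(adj):
    # Longest shortest-path distance, via BFS from every vertex;
    # assumes the subgraph is connected.
    best = 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        best = max(best, max(dist.values()))
    return best

def passes_filters(adj, k):
    # Rule (3): drop subgraphs of size < 4.
    if len(adj) < 4:
        return False
    d = diameter(adj)
    # Rule (1): 1-connected subgraphs must have diameter <= 4.
    if k == 1:
        return d <= 4
    # Rule (2): k-connected subgraphs (k >= 2) must have diameter <= k.
    return d <= k
```

For instance, a 4-vertex path (diameter 3) passes as a 1-connected subgraph, while a 6-vertex path (diameter 5) is removed.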
We choose fixed parameter values for each algorithm (Table 10). The implementations of RNSC and MCL were obtained from the main author of [42], Sylvian Brohee. The implementations of PCP and CMC were obtained from one of their authors, Limsoon Wong.
Optimal parameters for CMC, MCL, PCP and RNSC algorithms.
Authors' contributions
LW and CE conceived the project and designed the experiments. All authors contributed to conceiving and improving the proposed algorithm. MH implemented the algorithm during all stages of its
development and performed all the experiments. All authors contributed to writing the manuscript. All authors have read and approved the manuscript.
We thank Mehdi Sadeghi and Hamid Pezeshk for valuable comments and suggestions. We thank Sylvian Brohee for providing us the implementations of MCL and RNSC. We thank Hon Nian Chua for generously
allocating us time and resources on the physical interactions of PPI[BioGRID]. This research was supported in part by Shahid Beheshti University (Eslahchi); a grant from Iran's Institute for Research
in Fundamental Sciences (Eslahchi, Habibi); and a Singapore's National Research Foundation grant NRF-G-CRP-2997-04-082(d) (Wong).
• Fields S. Proteomics. Proteomics in genomeland. Science. 2001;291:1221–1224. doi: 10.1126/science.291.5507.1221. [PubMed] [Cross Ref]
• Uetz P, Giot L, Cagney G, Mansfield TA, Judson RS, Knight JR. A comprehensive analysis of protein-protein interactions in Saccharomyces cerevisiae. Nature. 2000;403:623–627. doi: 10.1038/
35001009. [PubMed] [Cross Ref]
• Ito T, Chiba T, Ozawa R, Yoshida M, Hattori M, Sakaki Y. A comprehensive two-hybrid analysis to explore the yeast protein interactome. Proc Natl Acad Sci USA. 2001;98:4569–4574. doi: 10.1073/
pnas.061034498. [PMC free article] [PubMed] [Cross Ref]
• Drees BL, Sundin B, Brazeau E, Caviston JP, Chen GC, Guo W. A protein interaction map for cell polarity development. J Cell Biol. 2001;154:549–571. doi: 10.1083/jcb.200104057. [PMC free article]
[PubMed] [Cross Ref]
• Fromont-Racine M, Mayes AE, Brunet-Simon A, Rain JC, Colley A, Dix I. Genome-wide protein interaction screens reveal functional networks involving Sm-like proteins. Yeast. 2000;17:95–110. doi:
10.1002/1097-0061(20000630)17:2<95::AID-YEA16>3.0.CO;2-H. [PMC free article] [PubMed] [Cross Ref]
• Ho Y, Gruhler A, Heilbut A, Bader GD, Moore L, Adams S-L, Millar A, Taylor P, Bennett K, Boutilier K, Yang L, Wolting C, Donaldson I, Schandorff S, Shewnarane J, Vo M, Taggart J, Goudreault M,
Muskat B, Alfarano C, Dewar D, Lin Z, Michalickova K, Willems AR, Sassi H, Nielsen PA, Rasmussen KJ, Andersen JR, Johansen LE, Hansen LH, Jespersen H, Podtelejnikov A, Nielsen E, Crawford J,
Poulsen V, Srensen BD, Matthiesen J, Hendrickson RC, Gleeson F, Pawson T, Moran MF, Durocher D, Mann M, Hogue CW, Figeys D, Tyers M. Systematic identification of protein complexes in
Saccharomyces cerevisiae by mass spectrometry. Nature. 2002;415:180–183. doi: 10.1038/415180a. [PubMed] [Cross Ref]
• Gavin AC, Bosche M, Krause R, Grandi P, Marzioch M, Bauer A, Schultz J, Rick JM, Michon AM, Cruciat CM, Remor M, Hoefert C, Schelder M, Brajenovic M, Ruffner H, Merino A, Klein K, Hudak M,
Dickson D, Rudi T, Gnau V, Bauch A, Bastuck S, Huhse B, Leutwein C, Heurtier MA, Copley RR, Edelmann A, Querfurth E, Rybin V, Drewes G, Raida M, Bouwmeester T, Bork P, Seraphin B, Kuster B,
Neubauer G, Superti-Furga G. Functional organization of the yeast proteome by systematic analysis of protein complexes. Nature. 2002;415(6868):141–147. doi: 10.1038/415141a. [PubMed] [Cross Ref]
• Gavin AC, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, Rau C, Jensen LJ, Bastuck S, Dumpelfeld B, Edelmann A, Heurtier MA, Hoffman V, Hoefert C, Klein K, Hudak M, Michon AM, Schelder M,
Schirle M, Remor M, Rudi T, Hooper S, Bauer A, Bouwmeester T, Casari G, Drewes G, Neubauer G, Rick JM, Kuster B, Bork P, Russell RB, Superti-Furga G. Proteome survey reveals modularity of the
yeast cell machinery. Nature. 2006;440(7084):631–636. doi: 10.1038/nature04532. [PubMed] [Cross Ref]
• Krogan NJ, Cagney G, Yu H, Zhong G, Guo X, Ignatchenko A, Li J, Pu S, Datta N, Tikuisis AP, Punna T, Peregrn-Alvarez JM, Shales M, Zhang X, Davey M, Robinson MD, Paccanaro A, Bray JE, Sheung A,
Beattie B, Richards DP, Canadien V, Lalev A, Mena F, Wong P, Starostine A, Canete MM, Vlasblom J, Wu S, Orsi C, Collins SR, Chandran S, Haw R, Rilstone JJ, Gandi K, Thompson NJ, Musso G, St Onge
P, Ghanny S, Lam MH, Butland G, Altaf-Ul-Amin M, Kanaya S, Shilatifard A, O'Shea E, Weissman JS, Ingles CJ, Hughes TR, Parkinson J, Gerstein M, Wodak SJ, Emili A, Greenblatt JF. Global landscape
of protein complexes in the yeast Saccharomyces cerevisiae. Nature. 2006;440(7084):637–643. doi: 10.1038/nature04670. [PubMed] [Cross Ref]
• Spirin V, Mriny LA. Protein complexes and functional modules in molecular networks. PNAS. 2003;100(21):12123–12128. doi: 10.1073/pnas.2032324100. [PMC free article] [PubMed] [Cross Ref]
• Bader GD, Hogue CW. An automated method for finding molecular complexes in large protein interaction networks. BMC Bioinformatics. 2003;4(2):1–27. [PMC free article] [PubMed]
• Pereira-Leal JB, Enright AJ, Ouzounis CA. Detection of functional modules from protein interaction networks. Proteins. 2004;54:49–57. doi: 10.1002/prot.10505. [PubMed] [Cross Ref]
• King AD, Przulj N, Jurisica I. Protein complex prediction via cost-based clustering. Bioinformatics. 2004;20(17):3013–3020. doi: 10.1093/bioinformatics/bth351. [PubMed] [Cross Ref]
• Arnau V, Mars S, Marin I. Iterative cluster analysis of protein interaction data. Bioinformatics. 2005;21(3):364–378. doi: 10.1093/bioinformatics/bti021. [PubMed] [Cross Ref]
• Lu H, Zhu X, Liu H, Skogerb G, Zhang J, Zhang Y, Cai L, Zhao Y, Sun S, Xu J, Bu D, Chen R. The interactome as a tree - an attempt to visualize the protein-protein interaction network in yeast.
Nucleic Acids Res. 2004;32(16):4804–4811. doi: 10.1093/nar/gkh814. [PMC free article] [PubMed] [Cross Ref]
• Altaf-Ul-Amin M, Shinbo Y, Mihara K, Kurokawa K, Kanaya S. Development and implementation of an algorithm for detection of protein complexes in large interaction networks. BMC Bioinformatics.
2006;7:207. doi: 10.1186/1471-2105-7-207. [PMC free article] [PubMed] [Cross Ref]
• Said MR, Begley TJ, Oppenheim AV, Lauffenburger DA, Samson LD. Global network analysis of phenotypic effects: protein networks and toxicity modulation in Saccharomyces cerevisiae. Proc Natl Acad Sci USA. 2004;101(52):18006–18011. doi: 10.1073/pnas.0405996101. [PMC free article] [PubMed] [Cross Ref]
• Dunn R, Dudbridge F, Sanderson CM. The use of edge-betweenness clustering to investigate biological function in protein interaction networks. BMC Bioinformatics. 2005;6:39. doi: 10.1186/
1471-2105-6-39. [PMC free article] [PubMed] [Cross Ref]
• Bandyopadhyay S, Sharan R, Ideker T. Systematic identification of functional orthologs based on protein network comparison. Genome Res. 2006;16(3):428–435. doi: 10.1101/gr.4526006. [PMC free
article] [PubMed] [Cross Ref]
• Middendorf M, Ziv E, Wiggins CH. Inferring network mechanisms. the Drosophila melanogaster protein interaction network. Proc Natl Acad Sci USA. 2005;102(9):3192–3197. doi: 10.1073/
pnas.0409515102. [PMC free article] [PubMed] [Cross Ref]
• Friedrich C, Schreiber F. Visualisation and navigation methods for typed protein-protein interaction networks. Appl Bioinformatics. 2003;2(3 Suppl):S19–S24. [PubMed]
• Ding C, He X, Meraz RF, Holbrook SR. A unified representation of multiprotein complex data for modeling interaction networks. Proteins. 2004;57:99–108. doi: 10.1002/prot.20147. [PubMed] [Cross Ref]
• Brun C, Chevenet F, Martin D, Wojcik J, Gunoche A, Jacq B. Functional classification of proteins for the prediction of cellular function from a protein-protein interaction network. Genome Biol.
2003;5:R6. doi: 10.1186/gb-2003-5-1-r6. [PMC free article] [PubMed] [Cross Ref]
• Vazquez A, Flammini A, Maritan A, Vespignani A. Global protein function prediction from protein-protein interaction networks. Nat Biotechnol. 2003;21(6):697–700. doi: 10.1038/nbt825. [PubMed] [
Cross Ref]
• Gagneur J, Krause R, Bouwmeester T, Casari G. Modular decomposition of protein-protein interaction networks. Genome Biol. 2004;5(8):R57. doi: 10.1186/gb-2004-5-8-r57. [PMC free article] [PubMed]
[Cross Ref]
• Van Dongen S. PhD thesis. Center for Mathematics and Computer Science (CWI), University of Utrecht; 2000. Graph clustering by flow simulation.
• Enright AJ, Dongen SV, Ouzounis CA. An efficient algorithm for large-scale detection of protein families. Nucleic Acids Res. 2002;30(7):1575–1584. doi: 10.1093/nar/30.7.1575. [PMC free article] [
PubMed] [Cross Ref]
• Chua HN, Sung WK, Wong L. Exploiting indirect neighbors and topological weight to predict protein function from protein-protein interactions. Bioinformatics. 2006;22:1623–1630. doi: 10.1093/
bioinformatics/btl145. [PubMed] [Cross Ref]
• Chen J, Hsu W, Lee ML, Ng SK. Discovering reliable protein interactions from high-throughput experimental data using network topology. Artificial Intelligence in Medicine. 2005;35(1):37. doi:
10.1016/j.artmed.2005.02.004. [PubMed] [Cross Ref]
• Chua HN, Ning K, Sung WK, Leong HW, Wong L. Using indirect protein interactions for the protein complex prediction. Journal of Bioinformatics and Computational Biology. 2008;6(3):435–466. doi:
10.1142/S0219720008003497. [PubMed] [Cross Ref]
• Liu G, Wong L, Chua HN. Complex discovery from weighted PPI networks. Bioinformatics. 2009;25:1891–1897. doi: 10.1093/bioinformatics/btp311. [PubMed] [Cross Ref]
• Aloy P, Bottcher B, Ceulemans H, Leutwein C, Mellwig C, Fischer S, Gavin A, Bork P, Superti-Furga G, Serrano L, Russell RB. Structure-based assembly of protein complexes in yeast. Science. 2004;303:2026–2029. doi: 10.1126/science.1092645. [PubMed] [Cross Ref]
• Mewes HW, Heumann K, Kaps A, Mayer K, Pfeiffer F, Stocker S, Frishman D. MIPS. a database for genomes and protein sequences. Nucleic Acids Research. 1999;27(1):44–48. doi: 10.1093/nar/27.1.44. [
PMC free article] [PubMed] [Cross Ref]
• Diestel R. Graph Theory. Springer-Verlage, Heidelberg; 2005.
• Stark C, Breitkreutz BJ, Reguly T, Boucher L, Breitkreutz A, Tyers M. BioGRID: a general repository for interaction datasets. Nucleic Acids Research. 2006;34:D535–D539. doi: 10.1093/nar/gkj109. [
PMC free article] [PubMed] [Cross Ref]
• Pellegrini M, Marcotte EM, Thompson MJ, Eisenberg D, Yeates TO. Assigning protein functions by comparative genome analysis. Protein phylogenetic profiles. Proceedings of the National Academy of
Sciences, USA. 1999;96:4285–4288. doi: 10.1073/pnas.96.8.4285. [PMC free article] [PubMed] [Cross Ref]
• Wu J, Kasif S, DeLisi C. Identification of functional links between genes using phylogenetic profiles. Bioinformatics. 2003;19:1524–1530. doi: 10.1093/bioinformatics/btg187. [PubMed] [Cross Ref]
• Dandekar T, Snel B, Huynen M, Bork P. Conservation of gene order: a fingerprint of proteins that physically interact. Trends in Biochemical Sciences. 1998;23:324–328. doi: 10.1016/S0968-0004(98)
01274-2. [PubMed] [Cross Ref]
• Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, Davis AP, Dolinski K, Dwight S, Eppig JT, Harris MA, Hill DP, Issel-Tarver L, Kasarskis A, Lewis S, Matese JC, Richardson JE,
Ringwald M, Rubin GM, Sherlock G. Gene Ontology: tool for the unification of biology. The Gene Ontology Consortium. Nature Genetics. 2000;25:25–29. doi: 10.1038/75556. [PMC free article] [PubMed]
[Cross Ref]
• Yang Y, Liu X. A re-examination of text categorization methods. Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval. 1999. pp. 42–49.
• van Rijsbergen CJ. Information Retireval. Butterworths, London; 1979.
• Brohee S, van Helden J. Evaluation of clustering algorithms for protein-protein interaction networks. BMC Bioinformatics. 2006;7:488. doi: 10.1186/1471-2105-7-488. [PMC free article] [PubMed] [
Cross Ref]
Articles from BMC Systems Biology are provided here courtesy of BioMed Central
From Grid5000
Efficient exploitation of highly heterogeneous and hierarchical large-scale systems (Scheduling)
Leaders: Olivier Beaumont (CEPAGE), Frédéric Vivien (GRAAL)
Actual computing platforms tend to be large-scale hierarchical aggregations of local systems with very different characteristics. Most existing works deal with trivially parallel problems made of a large number of independent tasks. Many applications, however, are aggregations of interdependent subtasks that exchange intermediate results. Also, most algorithmic solutions rely on some centralized mechanism, and are thus not scalable, and rely on a precise and comprehensive knowledge of the platforms and applications, which are supposed to have static characteristics. The challenge is then to efficiently use such platforms to solve very demanding computing applications. To address this challenge, many different problems must be addressed:
• Because of the nature of the platforms, proposed solutions should not be centralized but rather distributed, or even use a peer-to-peer architecture. As, in practice, our a priori knowledge on
applications and platforms is fragmentary and approximate, and many characteristics evolve dynamically, the efficient use of these platforms requires the use of robust algorithmic solutions, that
is, which are able to cope with uncertainties and dynamicity. Developing such solutions can, for instance, include techniques such as sensitivity analysis and stochastic scheduling. New paradigms
such as game theory should also be explored.
• As proposed algorithmic solutions must be practical, they must be based on realistic platform models, and especially realistic models of interconnection networks. The relevant models must be
identified, and then the algorithmic implications studied.
• As data-induced communications can greatly impact applications' running times, special care must be put on data localization so as to minimize data movements, or at least contention.
• In order to efficiently handle non-trivially parallel applications, strategies must be designed to efficiently schedule task graphs and workflows.
Comparing two formulas...
April 6th 2010, 04:57 AM #1
Apr 2010
Comparing two formulas...
I have two formulas for a velocity displayed below
Vn = 175 / ( p^0.43 ) eqn 1
Ve = 135 / ( p^0.5 ) eqn 2
I know that for any value of p i choose, Equation 2 will give me a lower velocity.
I know this from a simple trial and error method, by substituting different values for p.
Besides this trial and error method, is there any way that i can prove that Ve<Vn all the time?
I have two formulas for a velocity displayed below
Vn = 175 / ( p^0.43 ) eqn 1
Ve = 135 / ( p^0.5 ) eqn 2
I know that for any value of p i choose, Equation 2 will give me a lower velocity.
I know this from a simple trial and error method, by substituting different values for p.
Besides this trial and error method, is there any way that i can prove that Ve<Vn all the time?
If Vn > Ve, then $\frac{175}{p^{0.43}}> \frac{135}{p^{0.5}}$.

Assuming p is positive, that is the same as $175p^{0.5}> 135p^{0.43}$, which leads to $\frac{p^{0.5}}{p^{0.43}}= p^{0.07}> \frac{135}{175}= \frac{27}{35}$, so that $p> \left(\frac{27}{35}\right)^{1/0.07}\approx 0.0245$. So this is NOT true for all p, but it is true for p > 0.0245, approximately.
If Vn > Ve, then $\frac{175}{p^{0.43}}> \frac{135}{p^{0.5}}$.

Assuming p is positive, that is the same as $175p^{0.5}> 135p^{0.43}$, which leads to $\frac{p^{0.5}}{p^{0.43}}= p^{0.07}> \frac{135}{175}= \frac{27}{35}$, so that $p> \left(\frac{27}{35}\right)^{1/0.07}\approx 0.0245$. So this is NOT true for all p, but it is true for p > 0.0245, approximately.
Sorry, but i cannot read ur equations properly,
can u retype it for me?
(esp the below line)
$\frac{p^{0.5}}{p^{0.43}}= p^{0.07}> \frac{135}{175}= \frac{27}{35}$ so that $p> \left(\frac{27}{35}\right)^{1/0.07}\approx 0.0245$. So this is NOT true for all p but is true for p > 0.0245.

i specifically cannot understand how u got, p^0.5 / p^0.43 = p^0.07
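The crossover can also be checked numerically (a quick sketch; the exponent 1/0.07 comes from solving p^0.07 = 135/175 for p):

```python
def vn(p):
    return 175 / p ** 0.43

def ve(p):
    return 135 / p ** 0.5

# Crossover: 175 / p^0.43 = 135 / p^0.5  =>  p^0.07 = 135/175.
p_star = (135 / 175) ** (1 / 0.07)
print(round(p_star, 4))  # ~0.0245

# Below the crossover Ve exceeds Vn; above it Vn exceeds Ve.
print(ve(0.02) > vn(0.02))  # True
print(ve(0.03) < vn(0.03))  # True
```

So Ve < Vn does not hold for all p, only for p above roughly 0.0245.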
Quick Links
Undergraduate Advising and Research
Parker House, HB 6201
Hanover, NH
03755-3529
Phone: 603.646.3690
Fax: 603.646.8190
Email:
Undergraduate research
Basic Structure of the Department
• The Mathematics department offers a single major, three modified majors (with biology, philosophy and complex systems) and nine minors. Of course, any student may construct a modified major with
math and any other discipline.
• Within the single major, the student has considerable flexibility. Several structures for the major (pure math, applied math, and math education [for those who intend to teach math at the
secondary level]) are described in the ORC, and can be fine-tuned in consultation with the department advisor to majors (Dana Williams, 6-2990).
• The Mathematics minors:
1. Mathematics
2. Applied Mathematics for Physical and Engineering Sciences
3. Applied Mathematics for Biological and Social Sciences
4. Mathematical Biology
5. Mathematical Logic
6. Mathematical Physics
7. Mathematical Finance
8. Complex Systems
9. Minor in Statistics
• Note: First-year students interested in taking mathematics should begin with the course indicated by their placement results (if any), and continue accordingly with the appropriate sequence.
Please refer to Math Placement and Sequencing for more information on specific courses.
Professor of Math Scott Pauls (646-1047) serves as the First-Year Advisor, and is available for consultation. A phone call is the best way to get immediate advice.
Courses for the Student with Little or No Background Who Wants to Explore Mathematics
• For students who wish to explore mathematics but who are not likely to take courses that require some mathematics (e.g., science and/or engineering courses), good options include one of the MATH 5 offerings, MATH 6, and/or MATH 10.
• Students who are likely to take courses which require some mathematics should take a calculus course – most likely MATH 3 or MATH 1-2 depending on their placement.
Information for the First-Year Student who Plans on Pursuing Mathematics
• Students who see themselves as potential math majors, modified majors, minors or who simply plan to pursue mathematics in-depth, are encouraged to contact Professor Dana Williams, the advisor to
majors, early in their Dartmouth career to discuss plans of study in order to best fit their curricular goals. Most students in this situation are encouraged to finish the calculus sequence and
MATH 22/24 as quickly as possible. Beyond those courses, students are encouraged to pick courses based on their interest.
Basic Information about Courses in Math
Refer to Math Placement and Sequencing for more information.
• MATH 1-2 is a fall-winter sequence for first-year students, by invitation only (as indicated on placement record), and is determined pre-matriculation. It covers over two terms what MATH 3 covers
in one. A student who places into 1-2 and wants to enroll must enroll in the fall term of their first year. The sequence is only offered fall-winter and is only available to first-year students.
Completing 1-2 makes a student eligible for MATH 8.
• MATH 3 (Intro to Calculus), MATH 8 (Calculus of Functions of One and Several Variables), and MATH 13 (Calculus of Vector-Valued Functions), form the basic calculus sequence.
• MATH 3 and 8 comprise material much of which is often covered in a high school curriculum but with the expectations and demands of college-level work.
• MATH 3, 8 and 13 are service courses for other departments (engineering, physics, etc.).
• MATH 5 is targeted for non-majors and fulfills the QDS requirement.
• MATH 10 (Introduction to Statistics) is targeted at non-majors who need basic statistics training.
• MATH 11, offered in the fall, is designed specifically for the first-year student who places out of 3 and 8. MATH 11 includes some material covered in MATH 8, and can be viewed as an equivalent
for MATH 13 (see below for more information). MATH 12 is the honors section of MATH 11, and explores the broader role of calculus within mathematics and the sciences.
• MATH 17 is designed for first-year students with credit for 3, 8 and often 13, and particularly motivated and interested in mathematics. The aim is to introduce a potential math major to
interesting questions in the discipline of mathematics before the student undergoes the rigors of the major. After taking 17, a student would probably continue with MATH 12, 13 or 14 (if they
have not already taken one of these) or MATH 22/24, or 23.
• A student should take MATH 22 (Linear Algebra with Applications) or 24 (the honors section of 22) before he or she decides to major in math. MATH 22/24 (and not the calculus sequence) constitutes
the introduction to higher-level abstract mathematics characteristic of the discipline.
Current Enrollments, Class Size, and Distributives | {"url":"http://www.dartmouth.edu/~ugar/premajor/courses/math.html","timestamp":"2014-04-20T06:01:15Z","content_type":null,"content_length":"16737","record_id":"<urn:uuid:4b201cf6-061b-4ff6-88d0-dd014d1bc28a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00126-ip-10-147-4-33.ec2.internal.warc.gz"} |
OCR for page 407
Adding It Up: Helping Children Learn Mathematics
11 CONCLUSIONS AND RECOMMENDATIONS
To many people, school mathematics is virtually a phenomenon of nature. It seems timeless, set in stone—hard to
change and perhaps not needing to change. But the school mathematics education of yesterday, which had a practical basis, is no longer viable. Rote learning of arithmetic procedures no longer has the
clear value it once had. The widespread availability of technological tools for computation means that people are less dependent on their own powers of computation. At the same time, people are much
more exposed to numbers and quantitative ideas and so need to deal with mathematics on a higher level than they did just 20 years ago. Too few U.S. students, however, leave elementary and middle
school with adequate mathematical knowledge, skill, and confidence for anyone to be satisfied that all is well in school mathematics. Moreover, certain segments of the U.S. population are not well
represented among those who succeed in learning mathematics. Widespread failure to learn mathematics limits individual possibilities and hampers national growth. Our experiences, discussions, and
review of the literature have convinced us that school mathematics demands substantial change. We recognize that such change needs to be undertaken carefully and deliberately, so that every child has
both the opportunity and support necessary to become proficient in mathematics. In this chapter, we present conclusions and recommendations to help move the nation toward the change needed in school mathematics. In the preceding chapters, we have offered
citations of research studies and of theoretical analyses, but we recognize that clear, unambiguous evidence is not available to address many of the important issues we have raised. It should
be obvious that much additional research will be needed to fill out the picture, and we have recommended some directions for that research to take.
The remaining recommendations reflect our consensus that the relevant data and theory are sufficiently persuasive to warrant movement in the direction indicated, with the proviso that more evidence
will need to be collected along the way. Information is now becoming available as to the effects on students’ learning in new curriculum programs in mathematics that are different from those programs
common today. Over the coming years, the volume of that information is certain to increase. The community of people concerned with mathematics education will need to pay continued attention to
studies of the effectiveness of new programs and will need to examine the available data carefully. In writing this report we were able to use few such studies because they were just beginning to be
published. We expect them collectively to provide valuable information that will warrant careful review at a later date by a committee like ours. Our report has concentrated on learning about
numbers, their properties, and operations on them. Although number is the centerpiece of pre-K to grade 8 mathematics, it is not the whole story, as we have noted more than once. Our reading of the
scholarly literature on number, together with our experience as teachers, creators, and users of mathematics, has yielded observations that might be applied to other components of school mathematics
such as measurement, geometry, algebra, probability, and data analysis. Number is used in learning concepts and processes from all these domains. Below we present some comprehensive recommendations
concerning mathematical proficiency that cut across all domains of policy, practice, and research. Then we propose changes needed in the curriculum if students are to develop mathematical
proficiency, and we offer some recommendations for instruction. Finally, we discuss teacher preparation and professional development related to mathematics teaching, setting out recommendations
designed to help teachers be more proficient in their work.
Mathematical Proficiency
As a goal of instruction, mathematical proficiency provides a better way to think about mathematics learning than
narrower views that leave out key features of what it means to know and be able to do mathematics. Mathematical proficiency, as defined in chapter 4, implies expertise in handling mathematical ideas.
Students with mathematical proficiency understand basic
concepts, are fluent in performing basic operations, exercise a repertoire of strategic knowledge, reason clearly and flexibly, and maintain a
positive outlook toward mathematics. Moreover, they possess and use these strands of mathematical proficiency in an integrated manner, so that each reinforces the others. It takes time for
proficiency to develop fully, but in every grade in school students can demonstrate mathematical proficiency in some form. In this report we have concentrated on those ideas about number that are
developed in grades pre-K through 8. We must stress, however, that proficiency spans all parts of school mathematics and that it can and should be developed every year that students are in school. All young Americans must learn to think mathematically, and they must think mathematically to learn. We have
elaborated on what such learning and thinking entail by proposing five strands of mathematical proficiency to be developed in school. The overriding premise of our work is that throughout the grades
from pre-K through 8 all students can and should be mathematically proficient. That means they understand mathematical ideas, compute fluently, solve problems, and engage in logical reasoning. They
believe they can make sense out of mathematics and can use it to make sense out of things in their world. For them mathematics is personal and is important to their future. School mathematics in the
United States does not now enable most students to develop the strands of mathematical proficiency in a sound fashion. Proficiency for all demands that fundamental changes be made concurrently in
curriculum, instructional materials, classroom practice, teacher preparation, and professional development. These changes will require continuing, coordinated action on the part of policy makers,
teacher educators, teachers, and parents. Although some readers may feel that substantial advances are already being made in reforming mathematics teaching and learning, we find real progress toward
mathematical proficiency to be woefully inadequate. These observations led us to five general recommendations regarding mathematical proficiency that reflect our vision for school mathematics. The integrated and balanced development of all five strands
of mathematical proficiency should guide the teaching and learning of school mathematics. Instruction should not be based on extreme positions that students learn, on the one hand, solely by
internalizing what a teacher or book says or, on the other hand, solely by inventing mathematics on their own. Teachers’ professional development should be high quality, sustained, and systematically
designed and deployed to help all students develop mathematical proficiency. Schools should support, as a central part of teachers’ work, engagement in sustained efforts to improve their mathematics
instruction. This support requires the provision of time and resources. The coordination of curriculum, instructional materials, assessment, instruction, professional development, and school
organization around the development of mathematical proficiency should drive school improvement efforts. Efforts to improve students’ mathematics learning should be informed by scientific evidence,
and their effectiveness should be evaluated systematically. Such efforts should be coordinated, continual, and cumulative. Additional research should be undertaken on the nature, development, and
assessment of mathematical proficiency. These recommendations are augmented in the discussion below. In that discussion we propose additional recommendations that detail some of the policies and
practices needed if all children are to be mathematically proficient.
Curriculum
The balanced and integrated development of all five strands of mathematical proficiency requires that various elements of the school curriculum—goals, core content, learning activities, and assessment efforts—be coordinated toward the same end. Achieving that coordination puts heavy demands on instructional
programs, on the materials used in instruction, and on the way in which instructional time is managed. The curriculum has to be organized within and across grades so that time for learning is used
effectively. Instead of cursory and repeated treatments of a topic, the curriculum should be focused on important ideas, allowing them to be developed thoroughly and treated in depth. The
unproductive recycling of mathematical content is to be avoided, but students need ample opportunities to review and consolidate their knowledge.
Building on Informal Knowledge
Most children in the United States enter
school with an extensive stock of informal knowledge about numbers from the counting they have done, from hearing number words and seeing number symbols used in everyday
life, and from various experiences in judging and comparing quantities. Many are also familiar with various patterns and some geometric shapes. This
knowledge serves as a basis for developing mathematical proficiency in the early grades. The level of children’s knowledge, however, varies greatly across socioeconomic and ethnic groups. Some
children have not had the experiences necessary to build the informal knowledge they need before they enter school. A number of interventions have demonstrated that any immaturity of mathematical
development can be overcome with targeted instructional activities. Parents and other caregivers, through games, puzzles, and other activities in the home, can also help children develop their
informal knowledge and can augment the school’s efforts. Just as adults in the home can help children avoid reading difficulties through activities that promote language and literacy growth, so too
can they help children avoid difficulties in mathematics by helping them develop their informal knowledge of number, pattern, shape, and space. Support from home and school can have a catalytic
effect on children’s mathematical development, and the sooner that support is provided, the better: School and preschool programs should provide rich activities with numbers and operations from the
very beginning, especially for children who enter without these experiences. Efforts should be made to educate parents and other caregivers as to why they should, and how they can, help their
children develop a sense of number and shape.
Learning Number Names
Research has shown that the English number names can inhibit children’s understanding of base-10 properties of the decimal system
and learning to use numerals meaningfully. Names such as “twelve” and “fifteen” do not make clear to children that 12=10+2 and 15=10+5. These connections are more obvious in some other languages.
U.S. children, therefore, often need extra help in understanding the base-ten organization underlying number names and in seeing quantities organized into hundreds, tens, and ones. Conceptual
supports (objects or diagrams) that show the magnitude of the quantities and connect them to the number names and written numerals have been found to help children acquire insight into the base-10
number system. That insight is important to learning and
understanding numerals and also to developing strategies for solving problems in arithmetic. So that number names will be understood and used
correctly, we recommend the following: Mathematics programs in the early grades should make extensive use of appropriate objects, diagrams, and other aids to ensure that all children understand and
are able to use number words and the base-10 properties of numerals, that all children can use the language of quantity (hundreds, tens, and ones) in solving problems, and that all children can
explain their reasoning in obtaining solutions.
Learning About Numbers
The number systems of pre-K-8 mathematics—the whole numbers, integers, and rational numbers—form a coherent structure. For each
of these systems, there are various ways to represent the numbers themselves and the operations on them. For example, a rational number might be represented by a decimal or in fractional form. It
might be represented by a word, a symbol, a letter, a point or length on a line, or a portion of a figure. Proficiency with numbers in the elementary and middle grades implies that students can not
only appreciate these different notations for a number but also can translate freely from one to another. It also means that they see connections among numbers and operations in the different number
systems. As a consequence of many instructional programs, students have had severe difficulty representing, connecting, and using numbers other than whole numbers. Innovations that link various
representations of numbers and situations in which numbers are used have been shown to produce learning with understanding. Creating this kind of learning will require changes in all parts of school
mathematics to ensure that the following recommendations are implemented: An integrated approach should be taken to the development of all five strands of proficiency with whole numbers, integers,
and rational numbers to ensure that students in grades pre-K-8 can use the numbers fluently and flexibly to solve challenging but accessible problems. In particular, procedures for calculation should frequently be linked to various representations and to situations in which they are used so that all strands are brought into play.
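The point about translating freely among notations can be made concrete with a short sketch (Python’s fractions module; the value 3/4 is just an illustration):

```python
from fractions import Fraction

# One rational number, several equivalent representations.
r = Fraction(3, 4)

as_fraction = r               # 3/4
as_decimal = float(r)         # 0.75
as_percent = float(r) * 100   # 75.0

# Translating freely between notations: a decimal or a percent
# maps back to the same exact fraction.
assert Fraction(75, 100) == r
assert Fraction("0.75") == r
print(as_fraction, as_decimal, as_percent)   # 3/4 0.75 75.0
```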
The conceptual bases for operations with numbers and how those operations relate to real situations should be a major focus of the curriculum.
Addition, subtraction, multiplication, and division should be presented initially with real situations. Students should encounter a wide range of situations in which those operations are used.
Different ways of representing numbers, when to use a specific representation, and how to translate from one representation to another should be included in the curriculum. Students should be given
opportunities to use these different representations to carry out operations and to understand and explain these operations. Instructional materials should include visual and linguistic supports to
help students develop this representational ability.
Operating with Single-Digit Numbers
Learning to operate with single-digit numbers has long been characterized in the United States as “learning
basic facts,” and the emphasis has been on rote memorization of those facts, also known as basic number combinations. For adults the simplicity of calculating with single-digit numbers often masks
the complexity of learning those combinations and the many different methods children can use in carrying out such calculations. Research has shown that children move through a fairly well-defined
sequence of solution methods in learning to perform operations with single-digit numbers, particularly for addition and subtraction, where rapid general procedures exist. Children progress from using
physical objects for representing problem situations to using more sophisticated counting and reasoning strategies, such as deriving one number combination from another (e.g., finding 7+8 by knowing
that it is 1 more than 7+7 or, similarly, finding 7×6 as 7 more than 7×5). They know that addition and multiplication are commutative and that there is a relation between addition and subtraction and
between multiplication and division. They use patterns in the multiplication table as the basis for learning the products of single-digit numbers. Instruction that takes such research into account is
needed if students are to become proficient: Children should learn single-digit number combinations with understanding. Instructional materials and classroom teaching should help students learn
increasingly abbreviated procedures for producing number combinations rapidly and accurately without always having to refer to tables or other aids.
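The derived-fact strategies described above (finding 7 + 8 as one more than 7 + 7, and 7 × 6 as 7 more than 7 × 5) can be sketched directly; the helper names are illustrative, not from the report:

```python
# Sketch of two derived-fact strategies children use for
# single-digit combinations, per the sequence described above.

def near_double(a, b):
    """Find a + b from the known double of the smaller addend."""
    small, large = min(a, b), max(a, b)
    double = small + small            # known fact: the double
    return double + (large - small)   # adjust by the difference

def build_on_known_product(a, b, known_b):
    """Find a * b from a nearby known product a * known_b."""
    known = a * known_b                # e.g. 7 * 5 = 35, from the fives pattern
    return known + a * (b - known_b)   # add a for each extra group

print(near_double(7, 8))                 # 15, via 7 + 7 = 14, then one more
print(build_on_known_product(7, 6, 5))   # 42, via 7 * 5 = 35, then 7 more
```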
Learning Numerical Algorithms
We believe that algorithms and their properties are important mathematical ideas that all students need to understand.
An algorithm is a reliable step-by-step procedure for solving problems. To perform arithmetic calculations, children must learn how numerical algorithms work. Some algorithms have been well
established through centuries of use; others may be invented by children on their own. The widespread availability of calculators for performing calculations has greatly reduced the level of skill
people need to acquire in performing multidigit calculations with paper and pencil. Anyone who needs to perform such calculations routinely today will have a calculator, or even a computer, at hand.
But the technology has not made obsolete the need to understand and be able to perform basic written algorithms for addition, subtraction, multiplication, and division of numbers, whether expressed
as whole numbers, fractions, or decimals. Beyond providing tools for computation, algorithms can be analyzed and compared, which can help students understand the nature and properties of operations
and of place-value notation for numbers. In our view, algorithms, when well understood, can serve as a valuable basis for reasoning about mathematics. Students acquire proficiency with multidigit
numerical algorithms through a progression of experiences that begin with the students modeling various problem situations. They then can learn algorithms that are easily understood because of
obvious connections to the quantities involved. Eventually, students can learn and use methods that are more efficient and general, though perhaps less transparent. Proficiency with numerical
algorithms is built on understanding and reasoning, as well as frequent opportunity for use. Two recommendations reflect our view of the role of numerical algorithms in grades pre-K-8: For addition,
subtraction, multiplication, and division, all students should understand and be able to carry out an algorithm that is general and reasonably efficient. Students should be able to use adaptive
reasoning to analyze and compare algorithms, to grasp their underlying principles, and to choose with discrimination algorithms for use in different contexts.
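As one illustration of a general, analyzable algorithm of the kind recommended above, here is a sketch of the standard right-to-left column-addition procedure (a minimal illustration, not taken from the report):

```python
def column_add(x, y):
    """Standard right-to-left addition algorithm with carries,
    working on digit lists as written in base-10 notation."""
    a, b = [int(d) for d in str(x)], [int(d) for d in str(y)]
    result, carry = [], 0
    while a or b or carry:
        digit_sum = (a.pop() if a else 0) + (b.pop() if b else 0) + carry
        carry, digit = divmod(digit_sum, 10)   # place value: carry the tens
        result.append(digit)
    return int("".join(str(d) for d in reversed(result)))

print(column_add(478, 356))   # 834
assert column_add(478, 356) == 478 + 356
```

Tracing the carries in a procedure like this is one way the steps of an algorithm can be analyzed and compared, as the recommendation suggests.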
Using Estimation and Mental Arithmetic
The accurate and efficient use of an algorithm rests on having a sense of the magnitude of the result.
Estimation techniques enable students not only to check whether they are performing an operation correctly but also to decide whether that operation makes sense for the problem they are solving. The
base-10 structure of numerals allows certain sums, differences, products, and quotients to be computed mentally. Activities using mental arithmetic develop number sense and increase flexibility in
using numbers. Mental arithmetic also simplifies other computations and estimations. For example, dividing by 0.25 is the same as multiplying by 4, which can be found by doubling twice. Whether or
not students are performing a written algorithm, they can use mental arithmetic to simplify certain operations with numbers. Techniques of estimation and of mental arithmetic are particularly
important when students are checking results obtained from a calculator or computer. If children are not encouraged to use the mental computational procedures they have when entering school, those
procedures will erode. But when instruction emphasizes estimation and mental arithmetic, conceptual understanding and fluency with mental procedures can be enhanced. Our recommendation about
estimation and computation, whether mental or written, is as follows: The curriculum should provide opportunities for students to develop and use techniques for mental arithmetic and estimation as a means of promoting a deeper number sense.
Representing and Operating with Rational Numbers
Rational numbers provide the first number system in which all the operations of arithmetic, including division, are possible. These numbers pose a major challenge to
young learners, in part because each rational number can represent so many different situations and because there are several different notational schemes for representing the same rational number,
each with its own method of calculation. An important part of learning about rational numbers is developing a clear sense of what they are. Children need to learn that rational numbers are numbers in
the same way that whole numbers are numbers. For children to use rational numbers to solve problems, they need to learn that the same rational number may be represented in different ways, as a
fraction, a decimal, or a percent. Fraction concepts and representations need to be related
OCR for page 407
Adding + It Up: Helping Children Learn Mathematics to those of division, measurement, and ratio. Decimal and fractional representations need to be connected and understood. Building these connections
takes extensive experience with rational numbers over a substantial period of time. Researchers have documented that difficulties in working with rational numbers can often be traced to weak
conceptual understanding. For example, the idea that a fraction gets smaller when its denominator becomes larger is difficult for children to accept when they do not understand what the fraction
represents. Children may try to apply ideas they have about whole numbers to rational numbers and run into trouble. Instructional sequences in which more time is spent at the outset on developing
meaning for the various representations of rational numbers and the concept of unit have been shown to promote mathematical proficiency. Research reveals that the kinds of errors students make when
beginning to operate with rational numbers often come because they have not yet developed meaning for these numbers and are applying poorly understood rules for whole numbers. Operations with
rational numbers challenge students’ naïve understanding of multiplication and division that multiplication “makes bigger” and division “makes smaller.” Although there is limited research on
instructional programs for developing proficiency with computations involving rational numbers, approaches that build on students’ intuitive understanding and that use objects or contexts that help
students make sense of the operations offer more promise than rule-based approaches. We make the following recommendation concerning the rational numbers: The curriculum should provide opportunities
for students to develop a thorough understanding of rational numbers, their various representations including common fractions, decimal fractions, and percents, and operations on rational numbers. These opportunities should involve connecting symbolic representations and operations with physical or pictorial representations, as well as translating between various symbolic representations.
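The naïve expectations noted above (multiplication “makes bigger,” division “makes smaller”) fail precisely for fractions below one; a two-line check with illustrative numbers:

```python
from fractions import Fraction

half = Fraction(1, 2)

# Multiplying by a fraction less than one makes the result smaller...
print(8 * half)    # 4
assert 8 * half < 8

# ...and dividing by it makes the result larger.
print(8 / half)    # 16
assert 8 / half > 8
```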
Extending the Place-Value System
The system of Hindu-Arabic numerals—in which there is a decimal point and each place to the right and the left is associated with a different power of 10—is one of
humanity’s greatest inventions for thinking about and operating with numbers. Mastery of that system does not come easily, however. Students need assistance not only in using the decimal system but
also in understanding its structure and how it works.
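The structure described above (each place worth ten times the place to its right) can be unpacked explicitly; a small sketch with an invented numeral:

```python
# Sketch: unpacking a decimal numeral into its base-10 place values,
# each place ten times the one to its right.

def place_values(numeral):
    """Return (digit, power-of-ten) pairs for a numeral like '47.35'."""
    whole, _, frac = numeral.partition(".")
    pairs = []
    for i, d in enumerate(whole):
        pairs.append((int(d), 10 ** (len(whole) - 1 - i)))   # tens, ones, ...
    for i, d in enumerate(frac):
        pairs.append((int(d), 10 ** -(i + 1)))               # tenths, hundredths, ...
    return pairs

print(place_values("47.35"))
# [(4, 10), (7, 1), (3, 0.1), (5, 0.01)]

# Recombining the places recovers the number (up to float rounding).
assert abs(sum(d * p for d, p in place_values("47.35")) - 47.35) < 1e-9
```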
Conceptual understanding and procedural fluency with multidigit numbers and decimal fractions require that students understand and use the base-10
quantities represented by number words and number notation. Research indicates that much of students’ difficulty with decimal fractions stems from their failure to understand the base-10
representations. Decimal representations need to be connected to multidigit whole numbers as groups getting 10 times larger (to the left) and one tenth as large (to the right). Referents (diagrams or
objects) showing the size of the quantities in different decimal places can be helpful in understanding decimal fractions and calculations with them. The following recommendation expresses our
concern that the decimal system be given a central place in the curriculum: The curriculum should devote substantial attention to developing an understanding of the decimal place-value system, to
using its features in calculating and problem solving, and to explaining calculation and problem- solving methods with decimal fractions. Developing Proportional Reasoning The concept of ratio is
much more difficult than many people realize. Proportional reasoning is the term given to reasoning that involves the equality and manipulation of ratios. Children often have difficulty comparing
ratios and using them to solve problems. Many school mathematics programs fail to develop children’s understanding of ratio comparisons and move directly to formal procedures for solving
missing-value proportion problems. Research tracing the development of proportional reasoning shows that proficiency grows as students develop and connect different aspects of proportional reasoning.
Further, the development of proportional reasoning can be supported by having students explore proportional situations in a variety of problem contexts using concrete materials or through data
collection activities. We see ratio and proportion as underdeveloped components of grades pre-K-8 mathematics: The curriculum should provide extensive opportunities over time for students to explore
proportional situations concretely, and these situations should be linked to formal procedures for solving proportion problems whenever such procedures are introduced.
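As a concrete illustration (my addition, not part of the report), a missing-value proportion problem asks for the unknown in a pair of equal ratios, e.g. 3/4 = x/12. The formal procedure is cross-multiplication, while proportional reasoning notices that 12 is 4 scaled by 3, so x must be 3 scaled by 3:

```python
# Hypothetical missing-value proportion problem: 3/4 = x/12.
a, b, c = 3, 4, 12
x_cross = a * c / b        # formal procedure: cross-multiplication
scale = c / b              # proportional reasoning: 12 is 4 scaled by 3
x_scaled = a * scale       # so x is 3 scaled by 3
print(x_cross, x_scaled)   # 9.0 9.0
```

Both routes give the same answer; the research summarized above is about helping students connect the second way of thinking to the first.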
Efforts to develop textbooks and other instructional materials should include research into how teachers can understand and use those materials effectively. A government agency or research foundation should fund an independent group to analyze textbooks and other instructional materials for the extent to which they promote mathematical proficiency. The group should recommend how these materials might be modified to promote greater mathematical proficiency. Giving Time to Instruction Research indicates that a key requirement for
developing proficiency is the opportunity to learn. In many U.S. elementary and middle school classrooms, students are not engaged in sustained study of mathematics. On some days in some classes they
are spending little or no time at all on the subject. Mathematical proficiency as we have defined it cannot be developed unless regular time (say, one hour each school day) is allocated to and used
for mathematics instruction in every grade of elementary and middle school. Further, we believe the strands of proficiency will not develop in a coordinated fashion unless continual attention is
given to every strand. The following recommendation expresses our concern that mathematics be given its rightful place in the curriculum: Mathematical proficiency as we have defined it cannot be
developed unless regular time is allocated to and used for mathematics instruction in every grade of elementary and middle school. Substantial time should be devoted to mathematics instruction each
school day, with enough time devoted to each unit and topic to enable students to develop understanding of the concepts and procedures involved. Time should be apportioned so that all strands of
mathematical proficiency together receive adequate attention. Giving Students Time to Practice Practice is important in the development of mathematical proficiency. When students have multiple
opportunities to use the computational procedures, reasoning processes, and problem-solving strategies they are learning, the methods they are using become smoother, more reliable, and better
understood. Practice alone does not suffice; it needs to be built on understanding and accompanied by feedback. In fact, premature practice has been shown to be harmful. The following recommendation
reflects our view of the role of practice:
Practice should be used with feedback to support all strands of mathematical proficiency and not just procedural fluency. In particular, practice on computational procedures should be designed to build on and extend understanding. Using Assessment Effectively At present, substantial time every year is taken away from mathematics instruction
in U.S. classrooms to prepare for and take externally mandated assessments, usually in the form of tests. Often, those tests are not well articulated with the mathematics curriculum, testing content
that has not been taught during the year or that is not central to the development of mathematical proficiency. Preparation for such tests, moreover, does not ordinarily focus on the development of
proficiency. Instead, much time is given to practicing calculation procedures and reviewing a multitude of topics. Teachers and students often waste valuable learning time because they are not
informed about the content to be tested or the form that test items will take. We believe that assessment, whether externally mandated or developed by the teacher, should support the development of
students’ mathematical proficiency. It needs to provide opportunities for students to learn rather than taking time away from their learning. Assessments in which students are learning as well as
showing what they have already learned can provide valuable information to teachers, schools, districts, and states, as well as the students themselves. Such assessments help teachers modify their
instruction to support better learning at each grade level. Time and money spent on assessment need to be used more effectively so that students have the opportunity to show what they know and can
do. Teachers need to receive timely and detailed information about students’ performance on each external assessment. In that way, students and teachers alike can learn from assessments instead of
having assessments used only to rank students, teachers, or schools. The following recommendations will help make assessment more effective in developing mathematical proficiency: Students and
teachers alike can learn from assessments instead of having assessments used only to rank students, teachers, or schools. Assessment, whether internal or external, should be focused on the
development and achievement of mathematical proficiency. In particular, assessments used to determine qualification for state and federal funding should reflect the definition of mathematics
proficiency presented in this report.
Information about the content and form of each external assessment should be provided so that teachers and students can prepare appropriately and efficiently. The results of each external assessment should be reported so as to provide feedback useful for teachers and learners rather than simply a set of rankings. A government agency or research foundation should fund an independent group to analyze external assessment programs for the extent to which they promote mathematical proficiency. The group should recommend how programs
might be modified to promote greater mathematical proficiency. Instruction Effective teaching—teaching that fosters the development of mathematical proficiency over time—can take a variety of forms.
Consequently, we endorse no single approach. All forms of instruction configure relations among teachers, students, and content. The quality of instruction is a function of teachers’ knowledge and
use of mathematical content, teachers’ attention to and handling of students, and students’ engagement in and use of mathematical tasks. The development of mathematical proficiency requires
thoughtful planning, careful execution, and continual improvement of instruction. It depends critically on teachers who understand mathematics, how students learn, and the classroom practices that
support that learning. They also need to know their students: who they are, what their backgrounds are, and what they know. The development of mathematical proficiency requires thoughtful planning,
careful execution, and continual improvement of instruction. Planning for Instruction Planning, whether for one lesson or a year, is often viewed as routine and straightforward. However, plans seldom
elaborate the content that the students are to learn or develop good maps of paths to take to reach learning goals. We believe that planning needs to reflect a deep and thorough consideration of the
mathematical content of a lesson and of students’ thinking and learning. Instructional materials need to support teachers in their planning, and teachers need to have time to plan. Instruction needs
to be planned with the development of mathematical proficiency in mind:
Content, representations, tasks, and materials should be chosen so as to develop all five strands of proficiency toward the big ideas of mathematics and the goals for instruction. Planning for instruction should take into account what students know, and instruction should provide ways of ascertaining what students know and think as well as their interests and needs. Rather than simply listing problems and exercises, teachers should plan for instruction by focusing on the learning goals for their students, keeping in mind how the
goals for each lesson fit with those of past and future lessons. Their planning should anticipate the events in the lesson, the ways in which the students will respond, and how those responses can be
used to further the lesson goals. Managing Classroom Discourse Mathematics classrooms are more likely to be places in which mathematical proficiency develops when they are communities of learners and
not collections of isolated individuals. Research on creating classrooms that function as communities of learners has identified several important features of these classrooms: ideas and methods are
valued, students have autonomy in choosing and sharing solution methods, mistakes are valued as sites of learning for everyone, and the authority for correctness lies in logic and the structure of
the subject, not in the teacher. In such classrooms the teacher plays a key role as the orchestrator of the discourse students engage in about mathematical ideas. Teachers are responsible for moving
the mathematics along while affording students opportunities to offer solutions, make claims, answer questions, and provide explanations to their peers. Teachers need to help bring a mathematical
discussion to a close, making sure that gaps have been filled and errors addressed. To develop mathematical proficiency, we believe that students require more than just the demonstration of
procedures. They need experience in investigating mathematical properties, justifying solution methods, and analyzing problem situations. We recommend the following: A significant amount of class
time should be spent in developing mathematical ideas and methods rather than only practicing skills.
Questioning and discussion should elicit students’ thinking and solution strategies and should build on them, leading to greater clarity and
precision. Discourse should not be confined to answers only but should include discussion of connections to other problems, alternative representations and solution methods, the nature of
justification and argumentation, and the like. Linking Experience to Abstraction Students acquire higher levels of mathematical proficiency when they have opportunities to use mathematics to solve
significant problems as well as to learn the key concepts and procedures of that mathematics. Although mathematics gains power and generality through abstraction, it finds both its sources and
applications in concrete settings, where it is made meaningful to the learner. There is an inevitable dialectic between concrete and abstract in which each helps shape the other. Exhortations to
“begin with the concrete” need to consider carefully what is meant by concrete. Research reveals that various kinds of physical materials commonly used to help children learn mathematics are often no
more concrete to them than symbols on paper might be. Concrete is not the same as physical. Learning begins with the concrete when meaningful items in the child’s immediate experience are used as
scaffolding with which to erect abstract ideas. To ensure that progress is made toward mathematical abstraction, we recommend the following: Links among written and oral mathematical expressions,
concrete problem settings, and students’ solution methods should be continually and explicitly made during school mathematics instruction. Assigning Independent Work Part of becoming proficient in
mathematics is becoming an independent learner. For that purpose, many teachers give homework. The limited research on homework in mathematics has been confined to investigations of the relation
between the quantity of homework assigned and students’ achievement test scores. Neither the quality nor the function of homework has been studied. Homework can have different purposes. For example,
it might be used to practice skills or to prepare the student for the next lesson. We believe that independent work serves several useful purposes. Regarding independence and homework, we make the
following recommendations:
Students should be provided opportunities to work independently of the teacher both individually and in pairs or groups. When homework is assigned for the purpose of developing skill, students should be sufficiently familiar with the skill and the tasks so that they are not practicing incorrect procedures. Using Calculators and Computers In
the discussion above, we mention the special role that calculators and computers can play in learning algebra. But they have many other roles to play throughout instruction in grades pre-K-8. Using
calculators and computers does not replace the need for fluency with other methods. Confronted with a complex arithmetic problem, students can use calculators and computers to see beyond tedious
calculations to the strategies needed to solve the problem. Technology can relieve the computational burden and free working memory for higher-level thinking so that there can be a sharper focus on
an important idea. Further, skillfully planned calculator investigations may reveal subtle or interesting mathematical ideas, such as the rules for order of operations. A large number of empirical
studies of calculator use, including long-term studies, have generally shown that the use of calculators does not threaten the development of basic skills and that it can enhance conceptual
understanding, strategic competence, and disposition toward mathematics. For example, students who use calculators tend to show improved conceptual understanding, greater ability to choose the
correct operation, and greater skill in estimation and mental arithmetic without a loss of basic computational skills. They are also familiar with a wider range of numbers than students who do not
use calculators and are better able to tackle realistic mathematics problems. Just like any instructional tool, calculators and computers can be used more or less effectively. Our concern is that,
when computing technology is used, it needs to contribute positively: When computing technology is used, it needs to contribute positively. In all grades of elementary and middle school, any use of
calculators and computers should be done in ways that help develop all strands of students’ mathematical proficiency.
Teacher Preparation and Professional Development One critical component of any plan to improve mathematics learning is the preparation and
professional development of teachers. If the goal of mathematical proficiency as portrayed in this report is to be reached by all students in grades pre-K to 8, their teachers will need to understand
and practice techniques of teaching for that proficiency. Our view of mathematics proficiency requires teachers to act in new ways and to have understanding that they once were not expected to have.
In particular, it is not a teacher’s fault that he or she does not know enough to teach in the way we are asking. It is a far from trivial task to acquire such understanding—something that cannot
reasonably be expected to happen in one’s spare time and something that will require major policy changes to support and promote. Teacher preparation and professional development programs will need
to develop proficiency in mathematics teaching, which has many parallels to proficiency in mathematics. Developing Specialized Knowledge The knowledge required to teach mathematics well is
specialized knowledge. It includes an integrated knowledge of mathematics, knowledge of the development of students’ mathematical understanding, and a repertoire of pedagogical practices that take
into account the mathematics being taught and the students learning it. The evidence indicates that these forms of knowledge are not acquired in conventional undergraduate mathematics courses,
whether they are general survey courses or specialized courses for mathematics majors. The implications for teacher preparation and professional development are that teachers need to learn these
forms of knowledge in ways that help them forge connections. Very few teachers currently have the specialized knowledge needed to teach mathematics in the way envisioned in this report. Mathematical
knowledge is a critical resource for teaching. Therefore, teacher preparation and professional development must provide significant and continuing opportunities for teachers to develop profound and
useful mathematical knowledge. Teachers need to know the mathematics of the curriculum and where the curriculum is headed. They need to understand the connections among mathematical ideas and how
they develop. Teachers also need to be able to unpack mathematical content and make visible to students the ideas behind the concepts and procedures. Finally, teachers need not only mathematical
proficiency but also the ability to use it in guiding discussions, modifying problems, and making decisions about what matters to pursue in class and what to let drop. Very few teachers currently
have the specialized knowledge needed to teach mathematics in the way envisioned in this report. Although it is not reasonable in the short term to
expect all teachers to acquire such knowledge, every school needs access to expertise in mathematics teaching. Teachers’ opportunities to learn can help them develop their own knowledge about
mathematics, about children’s thinking about mathematics, and about mathematics teaching. Such opportunities can also help teachers learn how to solve the sorts of problems that are central to the
practice of teaching. The following recommendations reflect our judgment concerning the specialized knowledge that teachers need: Teachers of grades pre-K-8 should have a deep understanding of the
mathematics of the school curriculum and the principles behind it. Programs and courses that emphasize “classroom mathematical knowledge” should be established specifically to prepare teachers to
teach mathematics to students in such grades as pre-K-2, 3–5, and 6–8. Teachers should learn how children’s mathematical knowledge develops and what their students are likely to bring with them to
school. To provide a basis for continued learning by teachers, their preparation to teach, their professional development activities, and the instructional materials they use should engage them,
individually and collectively, in developing a greater understanding of mathematics and of student thinking and in finding ways to put that understanding into practice. All teachers, whether
preservice or inservice, should engage in inquiry as part of their teaching practice (e.g., by interacting with students and analyzing their work). Through their preparation and professional
development, teachers should develop a repertoire of pedagogical techniques and the ability to use those techniques to accomplish lesson goals. Mathematics specialists—teachers who have special
training and interest in mathematics—should be available in every elementary school.
Working Together Elementary and middle school teachers in the United States report spending relatively little time, compared with their counterparts
in other countries, discussing the mathematics they are teaching or the methods they are using. They seldom plan lessons together, observe one another teach, or analyze students’ work collectively.
Studies of programs that require teachers to teach mathematically demanding curricula suggest that success is greater when teachers help one another not only learn the mathematics and learn about
student thinking but also practice new teaching strategies. Our recommendation concerning time is not just about how much is available but how it is used: Teachers should be provided with more time
for planning and conferring with each other on mathematics instruction with appropriate support and guidance. Capitalizing on Professional Meetings Teachers need more mathematically focused
opportunities to learn mathematics, and they need to be prepared to manage changes in the field. Mathematics teachers already come together at meetings of professional societies such as the National
Council of Teachers of Mathematics (NCTM), its affiliated groups, or other organizations. These occasions can provide opportunities for professional development of the sort discussed above. For
example, portions of national or regional meetings of the NCTM could be organized into minicourses or institutes, without competing sessions being held at the same time. Professional development
needs to grow out of current activities: Professional meetings and other occasions when teachers come together to work on their practice should be used as opportunities for more serious and
substantive professional development than has commonly been available. Sustaining Professional Development Preparing to teach is a career-long activity. Teachers need to continue to learn. But rather
than being focused on isolated facts and skills, teacher learning needs to be generative. That is, what teachers learn needs to serve as a basis for them to continue to learn from their practice.
They need to see that practice as demanding continual review, analysis, and improvement. Studies of teacher change indicate that short-term, fragmented professional development is ineffective for
developing teaching proficiency.
More resources of all types—money, time, leadership, attention—need to be invested in professional development for teachers of mathematics, and
those resources already available could be used more wisely and productively. Each year a substantial amount of money is invested in professional development programs for teachers. Individual schools
and districts fund some programs locally. Others are sponsored and funded by state agencies, federal agencies, or professional organizations. Much of the time and money invested in such programs,
however, is not used effectively. Sponsors generally fund short-term, even one-shot, activities such as daylong workshops or two-day institutes that collectively do not form a cohesive and cumulative
program of professional development. Furthermore, these activities are often conducted by an array of professional developers with minimal qualifications in mathematics and mathematics teaching.
Professional development in mathematics needs to be sustained over time that is measured in years, not weeks or months, and it needs to involve a substantial amount of time each year. Our
recommendations to raise the level of professional development are as follows: Professional development in mathematics needs to be sustained over time that is measured in years, not weeks or months.
Local education authorities should give teachers support, including stipends and released time, for sustained professional development. Providers of professional development should know mathematics
and should know about students’ mathematical thinking, how mathematics is taught, and teachers’ thinking about mathematics and their own practice. Organizations and agencies that fund professional
development in mathematics should focus resources on multi-year, coherent programs. Resources of agencies at every level should be marshaled to support substantial and sustained professional
development. Monitoring Progress Toward Mathematical Proficiency In this report we have set forth a variety of observations, conclusions, and recommendations that are designed to bring greater
coherence and balance to the learning and teaching of mathematics. In particular, we have described five strands of mathematical proficiency that should frame all efforts to improve school
mathematics. Over the past decades, various visions have been put forward for improving curriculum, instruction, and assessment in mathematics, and many of those ideas have been tried in schools.
Unfortunately, new programs are tried but
then abandoned before their effectiveness has been well tested, and lessons learned from program evaluations are often lost. Although aspects of
mathematics proficiency have been studied, other aspects such as productive disposition have received less attention; and no one, including the National Assessment of Educational Progress (NAEP), has
studied the integrated portrait of mathematics proficiency set forth in this report. In order that efforts to improve U.S. school mathematics might be more cumulative and coordinated, we make the
following recommendation: An independent group of recognized standing should be constituted to assess the progress made in meeting the goal of mathematical proficiency for all U.S. schoolchildren.
Supporting the Development of Mathematical Proficiency The mathematics students need to learn today is not the same mathematics that their parents and grandparents needed to learn. Moreover,
mathematics is a domain no longer limited to a select few. All students need to be mathematically proficient to the levels discussed in this report. The mathematics of grades pre-K-8 today involves
much more than speed in pencil-and-paper arithmetic. Students need to understand mathematics, use it to solve problems, reason logically, compute fluently, and use it to make sense of their world.
For that to happen, each student will need to develop the strands of proficiency in an integrated fashion. No country—not even those performing highest on international surveys of mathematics
achievement—has attained the goal of mathematical proficiency for all its students. It is an extremely ambitious goal, and the United States will never reach it by continuing to tinker with the
controls of educational policy, pushing one button at a time. Adopting mathematics textbooks from other countries, testing teachers, holding students back a grade, putting schools under state
sanctions—none of these alone will advance school mathematics very far toward mathematical proficiency for all. Instead, coordinated, systematic, and sustained modifications will need to be made in
how school mathematics instruction has commonly proceeded, and support of new and different kinds will be required. Leadership and attention to the teaching of mathematics are needed in the
formulation and implementation of policies at all levels of the educational system. | {"url":"http://www.nap.edu/openbook.php?record_id=9822&page=407","timestamp":"2014-04-20T04:14:02Z","content_type":null,"content_length":"94928","record_id":"<urn:uuid:423048e1-b6aa-49a7-b32b-ef685b888b0f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with showing integral is differentiable
October 18th 2009, 06:34 AM #1
Sep 2009
f is continuous on [0,1].
Given that $\int_0^{\pi} x f(\sin x)\, dx = \frac{\pi}{2}\int_0^{\pi} f(\sin x)\, dx$
Show that $F(x)=\int_0^1 f(x+t)\, dt$ is differentiable and that $F'(x)=f(x+1)-f(x)$.
I think the first relation is not needed here.
put $u=x+t$ in your integral and then $F(x)=\int_x^{x+1}f(u)\,du,$ and $f$ was given as continuous, so $F$ is differentiable and then $F'(x)=f(x+1)-f(x).$
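A quick numerical sanity check of this argument (my addition, not from the thread): take $f(u)=\sin(u)$, for which $F$ has a closed form, and compare a finite-difference estimate of $F'(x)$ against the claimed formula.

```python
import math

# With f(u) = sin(u):  F(x) = integral of sin(u) from x to x+1
#                           = cos(x) - cos(x+1)   (closed form).
def F(x):
    return math.cos(x) - math.cos(x + 1)

x, h = 0.7, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central-difference estimate of F'(x)
claimed = math.sin(x + 1) - math.sin(x)     # the claimed F'(x) = f(x+1) - f(x)
print(abs(numeric - claimed) < 1e-7)        # True
```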
NYJM Abstract - 11-4 - Terence Tao
Terence Tao
Global well-posedness and scattering for the higher-dimensional energy-critical nonlinear Schrödinger equation for radial data
Published: February 28, 2005
Keywords: Nonlinear Schrödinger equation, Strichartz estimates, Morawetz inequalities, spherical symmetry, energy bounds
Subject: 35Q55
In any dimension n ≧ 3, we show that spherically symmetric bounded energy solutions of the defocusing energy-critical nonlinear Schrödinger equation
i u_t + Δ u = |u|^{4/(n-2)} u
in R × R^n exist globally and scatter to free solutions; this generalizes the three and four-dimensional results of Bourgain, 1999a and 1999b, and Grillakis, 2000. Furthermore we have bounds on
various spacetime norms of the solution which are of exponential type in the energy, improving on the tower-type bounds of Bourgain. In higher dimensions n ≧ 6 some new technical difficulties
arise because of the very low power of the nonlinearity.
The author is a Clay Prize Fellow and is supported by the Packard Foundation.
Author information
Department of Mathematics, UCLA, Los Angeles CA 90095-1555 | {"url":"http://www.emis.de/journals/NYJM/j/2005/11-4.html","timestamp":"2014-04-21T02:31:25Z","content_type":null,"content_length":"9459","record_id":"<urn:uuid:a019565a-c051-48d0-af4d-6cdb07861d59>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integral definition of gamma function problem
September 18th 2011, 11:17 PM #1
May 2010
Integral definition of gamma function problem
Hey guys got a tricky question (for me at least) that i'm not sure about. Calculus was never really my strong point.
Not sure how to use the math tags very well either so i just attached a picture of my exam revision sheet. I get the basic gist of integration by parts and Laplace Transform functions but I can't
seem to get this one going......
I seem to get Gamma Function Symbol (Can't remember the name) equals zero, now i know that's not right haha.
Some help would be great if you guys could manage.
EDIT: Attached file : )
Last edited by CaptainBlack; September 18th 2011 at 11:53 PM.
Re: Integral definition of the gamma function problem
Why is the subject line "Laplace transform"? This is not an LT question.
For (a) try integration by parts:
$\Gamma(x)=\int_0^{\infty} t^{x-1}e^{-t}\; dt= \int_0^{\infty} [t] \left[t^{x-2}e^{-t}\right]\; dt$
If you have any problems with that post a follow up question in this thread.
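To see the hint in action numerically (my addition, not from the thread), one can approximate the defining integral with a midpoint rule and check the recurrence $\Gamma(x)=(x-1)\Gamma(x-1)$ that the integration by parts produces:

```python
import math

# Midpoint-rule approximation of Gamma(x) = integral of t^(x-1) e^(-t) dt
# over [0, infinity), truncated at t = 50 (the tail beyond that is negligible here).
def gamma_integral(x, upper=50.0, n=200_000):
    h = upper / n
    return h * sum(((k + 0.5) * h) ** (x - 1) * math.exp(-(k + 0.5) * h)
                   for k in range(n))

x = 3.5
g_x, g_xm1 = gamma_integral(x), gamma_integral(x - 1)
print(abs(g_x - (x - 1) * g_xm1) < 1e-5)   # True: recurrence Gamma(x) = (x-1) Gamma(x-1)
print(abs(g_x - math.gamma(x)) < 1e-5)     # True: matches the library gamma
```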
Re: Integral definition of gamma function problem
Sorry for my late reply,
Thank you very much, a simple solution, I should have seen that straight away, funny how that works.
And I do apologise for the inaccurate title, I originally recognised the function as a Laplace Transform which is again, my mistake.
Re: Integral definition of gamma function problem
can you please put the solution steps
Re: Integral definition of gamma function problem
For the second problem, you know that $\Gamma(x)=(x-1)\Gamma(x-1)$, which means $\Gamma(x-1)=(x-2)\Gamma(x-2)$, and so on. Therefore, for a positive integer $n$ you get:
$\Gamma(n)=(n-1)(n-2)\cdots 2\cdot 1\cdot\Gamma(1)$
Thus, you get:
$\Gamma(n)=(n-1)!\,\Gamma(1)$
The only thing you have to do now is to prove that $\Gamma(1)=1$
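A quick check of the factorial identity this recursion leads to (my addition, not part of the reply), using Python's built-in gamma function:

```python
import math

# Gamma(1) = 1, and Gamma(n) = (n-1)! for positive integers n.
print(math.isclose(math.gamma(1), 1.0))                    # True
print(all(math.isclose(math.gamma(n), math.factorial(n - 1))
          for n in range(1, 8)))                           # True
```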
Half Moon Bay Math Tutor
Find a Half Moon Bay Math Tutor
...I have extensive experience teaching international baccalaureate mathematics. I'm flexible in my teaching style, and will work with parents, schools and students to determine the format of
tutoring most likely to bring them success. In all cases, however, I emphasise risk taking, self-sufficiency and critical thinking when approaching problems.
11 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I highly customize to the specific needs, learning styles, and personalities of my students. Furthermore, I keep their parents updated. I am strong in all areas in physics and math.
8 Subjects: including algebra 2, calculus, geometry, linear algebra
...Besides 25 years of software experience, I have taught part-time in the math and computer science department at San Jose State University. I love to see my students understand the principles
and apply them to solving problems and will try different ways to make these clear to them, depending on ...
23 Subjects: including probability, algebra 1, algebra 2, calculus
...A lot of people know Math, a lot of people can tutor Math, but for me it's about the individual needing help. I have worked with students from all ages, from pre-teen to people in their second/
third careers. Everyone has their own style of learning.
8 Subjects: including algebra 1, algebra 2, vocabulary, prealgebra
...My approach: Learning and especially private tutoring should be a fun, meaningful experience. And to achieve that goal for every student he or she must be met at his or her level. I have found
the best method for success mixes healthy doses of humor, creativity, and structure.
24 Subjects: including prealgebra, algebra 1, English, reading | {"url":"http://www.purplemath.com/Half_Moon_Bay_Math_tutors.php","timestamp":"2014-04-18T04:16:46Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:599d570a-57a2-4de7-9e60-4c37773f187e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00335-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate the Sine of an Angle
Because you spend a ton of time in pre-calculus working with trigonometric functions, you need to understand ratios. One important ratio in right triangles is the sine. The sine of an angle is
defined as the ratio of the opposite leg to the hypotenuse. In symbols, you write:
sin θ = opposite/hypotenuse
In order to find the sine of an angle, you must know the lengths of the opposite side and the hypotenuse. You will always be given the lengths of two sides, but if the two sides aren’t the ones you
need to find a certain ratio, you can use the Pythagorean theorem to find the missing one. For example, to find the sine of angle F (sin F) in the figure, follow these steps:
1. Identify the hypotenuse.
Where's the right angle? It's ∠R, so side r, across from it, is the hypotenuse. You can label it Hyp.
2. Locate the opposite side.
Look at the angle in question, which is ∠F here. Which side is across from it? Side f is the opposite leg. You can label it Opp.
3. Label the adjacent side.
The only side that's left, side k, has to be the adjacent leg. You can label it Adj.
4. Locate the two sides that you use in the trig ratio.
Because you are finding the sine of ∠F, you need the opposite side and the hypotenuse. For this triangle, (leg)^2 + (leg)^2 = (hypotenuse)^2 becomes f^2 + k^2 = r^2. Plug in what you know to get f^2 + 7^2 = 14^2. When you solve this
for f, you get f = √147 = 7√3.
5. Find the sine.
With the information from Step 4, you can find that sin F = 7√3/14,
which reduces to √3/2.
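As a quick numeric check of the five steps (a minimal sketch; the side lengths k = 7 and r = 14 come from the worked triangle):

```python
import math

k = 7.0    # adjacent leg (given)
r = 14.0   # hypotenuse (given)

# Step 4: recover the opposite leg f from the Pythagorean theorem.
f = math.sqrt(r ** 2 - k ** 2)   # sqrt(147) = 7 * sqrt(3)

# Step 5: sine = opposite / hypotenuse.
sin_F = f / r                    # sqrt(3) / 2, about 0.866

print(f, sin_F)
```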
Type Ia Supernovae Light Curves
Does anyone know the mathematical form of Type Ia supernovae light curves? I am trying to analyze supernovae data. I need to fit a function to the magnitude vs time data. So I require the
mathematical form for magnitude as a function of time. If anyone has any idea about that, or can suggest a way to bypass this problem PLEASE let me know. | {"url":"http://www.physicsforums.com/showthread.php?p=4251350","timestamp":"2014-04-18T13:46:32Z","content_type":null,"content_length":"33807","record_id":"<urn:uuid:ca9dae92-dfca-44f2-a0c6-60d500b8d02f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about quadratic formula/parabolas
December 28th 2005, 07:19 AM #1
Sep 2005
Question about quadratic formula/parabolas
The question is:
A parabola has zeros -6 and 2 and y-intercept -9.
a) Write its equation in factored form, expanded form and vertex form.
b)Use the expanded form and the quadratic formula to verify that the zeros are -6 and 2.
First, to write the equation in factored form I substituted y=-9, x=0 into y = a(x+6)(x-2) to find a, which gave -9 = -12a, so a = 3/4.
Which therefore made the equation in factored form:
y = (3/4)(x+6)(x-2)
And it was therefore in expanded form:
y = (3/4)x^2 + 3x - 9
However, in checking the zeros of this equation with the quadratic formula,
I could not get -6 and 2 for the zeros.
x = -b ± square root of (b^2 + 4ac), all over 2a
x = -3 ± square root of (3×3 + 4(3/4)(-9)) / 2a
x = -3 ± square root of (9 - 27) / 2a
x = -3 ± square root of (-18) / 2a
Already I know that this is wrong.
Please help?
I must email this to my teacher before the end of the week.
You solved for the equation correctly but your quadratic equation is incorrect.
Eq: (3/4)x^2+3x-9
x = (-b ± sqrt(b^2 - 4*a*c)) / (2a)
x = (-3 ± sqrt(3^2 - 4*0.75*(-9))) / 1.5
x = (-3 ± sqrt(9 + 27)) / 1.5 = (-3 ± 6) / 1.5, so x = 2 or x = -6
The quadratic formula for the roots is:
$x=\frac{-b\pm {\sqrt{b^2-4ac}}}{2a}$
substituting $a=3/4,\ b=3,\ c=-9$ will give the required roots.
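That substitution is easy to check numerically (a small sketch using the thread's coefficients a = 3/4, b = 3, c = -9):

```python
import math

a, b, c = 0.75, 3.0, -9.0        # coefficients of (3/4)x^2 + 3x - 9

disc = b ** 2 - 4 * a * c        # discriminant: 9 + 27 = 36
root1 = (-b + math.sqrt(disc)) / (2 * a)
root2 = (-b - math.sqrt(disc)) / (2 * a)

print(root1, root2)  # 2.0 and -6.0, the stated zeros
```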
Umm, this is the first time I have seen a parabola treated this way. It is unusual for me. I saw this yesterday and it intrigued me. Then a while ago, on my way home, I saw the reason behind it.
y = Ax^2 +Bx +C
is a parabola. It being a vertical parabola, it
is also a function of x, or y=f(x), because an imaginary vertical line will cross this parabola at only one point per position of the vertical line along the x-axis.
The zeroes at -6 and 2 mean they are zeroes of the function. And that was the reason why y = a(x+6)(x-2), or why it is a factored form of the vertical parabola. Zeroes---factors---polynomial.
I am used to [y = a(x-h)^2 +k] for a vertical parabola.
[ y = a(x-s)(x-t), whatever s and t mean, is new to me, as I have said. ]
where (h,k) is the vertex.
Or, it is more like: (y-k) = a(x-h)^2 -----------(i)
I guess that is what you call above as the vertex form of the parabola.
So you got the factored form and the expanded form. What about the vertex form?
Let us do it.
As you can see from the righthand side of Eq.(i), the (x-h)^2 is a perfect square, or (x-h) is multiplied by itself. That means we just "complete the square" of the x-terms in any of the factored
or expanded forms to get the vertex form.
Let's use, say, the factored form:
y = (3/4)(x+6)(x-2)
To "complete the square", we make sure first that the coefficient of the x^2 term is 1 only,
y = (3/4)[x^2 +4x -12]
y = (3/4)[(x^2 +4x) -12]
y = (3/4)[(x^2 +4x +(4/2)^2) -(4/2)^2 -12]
y = (3/4)[(x+2)^2 -4 -12]
y = (3/4)[(x+2)^2 -16]
y = (3/4)(x+2)^2 -(3/4)(16)
y = (3/4)(x+2)^2 -12 ------------------vertex form, or,
(y +12) = (3/4)(x+2)^2 ------------vertex form.
That means the vertex is (-2,-12). --------***
If you did not know the factored form way, by using the vertex form you can also get the equation of the parabola:
(y-k) = a(x-h)^2
at point (-6,0)....(0 -k) = a(-6 -h)^2 ----(1)
at point (2,0).....(0 -k) = a(2 -h)^2 --------(2)
at point (0,-9)....(-9 -k) = a(0 -h)^2 -----------(3)
3 equations, 3 unknowns, solvable.
Take care only when you equate the (-6 -h) of (1) to the (2 -h) of (2).
(-6 -h) = (2 -h)
Cannot be.
But (-6 -h)^2 is also [-(6+h)]^2 = (6+h)^2, so,
a(-6 -h)^2 = a(2 -h)^2
(6+h)^2 = (2-h)^2
6+h = 2-h (the other root, 6+h = -(2-h), would give 6 = -2, which is impossible)
h +h = 2 -6
2h = -4
h = -2 -----------the x-coordinate of the vertex.
Last edited by ticbol; December 29th 2005 at 11:38 AM.
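The three forms derived in this thread (factored, expanded, and vertex) can be checked against each other numerically; a small sketch (the sample x values are arbitrary, since two quadratics that agree at more than two points are identical):

```python
def factored(x):
    return 0.75 * (x + 6) * (x - 2)

def expanded(x):
    return 0.75 * x ** 2 + 3 * x - 9

def vertex(x):
    return 0.75 * (x + 2) ** 2 - 12

# A handful of sample values is enough for a sanity check.
for x in [-6, -2, 0, 2, 5]:
    assert factored(x) == expanded(x) == vertex(x)

print(vertex(-2))  # -12.0, confirming the vertex (-2, -12)
```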
Thank-you so much to all of you; your advise was very insightful.
Kamenev-Type Oscillation Criteria of Second-Order Nonlinear Dynamic Equations on Time Scales
Discrete Dynamics in Nature and Society
Volume 2013 (2013), Article ID 315158, 12 pages
Research Article
Kamenev-Type Oscillation Criteria of Second-Order Nonlinear Dynamic Equations on Time Scales
^1Department of Humanities & Education, Shunde Polytechnic, Foshan, Guangdong 528333, China
^2School of Mathematics & Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong 510275, China
Received 20 November 2012; Accepted 14 December 2012
Academic Editor: Elena Braverman
Copyright © 2013 Yang-Cong Qiu and Qi-Ru Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Using functions from some function classes and a generalized Riccati technique, we establish Kamenev-type oscillation criteria for second-order nonlinear dynamic equations on time scales of the form
. Two examples are included to show the significance of the results.
1. Introduction
In this paper, we study the second-order nonlinear dynamic equation on a time scale .
Throughout this paper, we will assume that(C1),(C2), where is a fixed positive constant,(C3), and there exist such that for all ,(C4).
Preliminaries about time scale calculus can be found in [1–3], and hence we omit them here. Note that for some typical time scales, we have the following properties, respectively:
(1) , we have
(2) , we have
(3) , we have
(4) , we have
Without loss of generality, we assume throughout that since we are interested in extending oscillation criteria for the typical time scales above.
Definition 1. A solution of (1) is said to have a generalized zero at if , and it is said to be nonoscillatory on if there exists such that for all . Otherwise, it is oscillatory. Equation (1) is
said to be oscillatory if all solutions of (1) are oscillatory.
The theory of time scales, which has recently received a lot of attention, was introduced by Stefan Hilger in his Ph.D. thesis [4] in 1988 in order to unify continuous and discrete analysis; see also
[5]. In recent years, there has been much research activity concerning the oscillation and nonoscillation of solutions of dynamic equations on time scales; for example, see [1–28] and the references
therein. In Došlý and Hilger [10], the authors considered the second-order dynamic equation and gave necessary and sufficient conditions for the oscillation of all solutions on unbounded time scales.
In Del Medico and Kong [8, 9], the authors employed the following Riccati transformation and gave sufficient conditions for Kamenev-type oscillation criteria of (6) on a measure chain.
In Wang [25], the author considered second-order nonlinear damped differential equation used the following generalized Riccati transformations where , and gave a new oscillation criteria of (8). In
Huang and Wang [16], the authors considered second-order nonlinear dynamic equation on time scales By using a similar generalized Riccati transformation which is more general than (7) where , , the
authors extended the results in Del Medico and Kong [8, 9] and established some new Kamenev-type oscillation criteria.
In this paper, we will use functions in some function classes and a similar generalized Riccati transformation as (11) and was used in [25, 26] for nonlinear differential equations, and establish
Kamenev-type oscillation criteria for (1) in Section 2. Finally, in Section 3, two examples are included to show the significance of the results.
For simplicity, throughout this paper, we denote , where , and are denoted similarly.
2. Kamenev-Type Criteria
In this section we establish Kamenev-type criteria for oscillation of (1). Our approach to oscillation problems of (1) is based largely on the application of the Riccati transformation. Now, we give
the first lemma.
Lemma 2. Assume that (C1)–(C4) hold and that there exists a function such that . Also, suppose that is a solution of (1) satisfies for with . For , define where , , and for . Then, satisfies where =
+ , = , = .
Proof. By (C3), we see that and are both positive, both negative, or both zero. When , which implies that , it follows that When , which implies that , it follows that When , which implies that and ,
it follows that
Hence, we always have that is, (13) holds. Then, differentiating (12) and using (1), it follows that that is, (14) holds. Lemma 2 is proved.
Remark 3. In Lemma 2, the condition ensures that the coefficient of in (14) is always negative. The condition is obvious and easy to be fulfilled. For example, when for all , we have , by (C3), we
see that , and when , the condition is also fulfilled.
Let and . For any function : , denote by and the partial derivatives of with respect to and , respectively. For , denote by the space of functions which are integrable on any compact subset of .
Define These function classes will be used throughout this paper. Now, we are in a position to give our first theorem.
Theorem 4. Assume that (C1)–(C4) hold and that there exists a function such that . Also, suppose that there exist and such that and for any , where is defined as before, and Then, (1) is oscillatory.
Proof. Assume that (1) is not oscillatory. Without loss of generality, we may assume there exists such that for . Let be defined by (12). Then, by Lemma 2, (13) and (14) hold.
For simplicity in the following, we let , and and omit the arguments in the integrals. For ,
Since on , we see that . From and (C3), we have
Multiplying (14), where is replaced by , by and integrating it with respect to from to with and , we obtain Noting that , by the integration by parts formula, we have
Since on , from (13) we see that for , For , , and , from (24), we have
For , , and , from (24), we have Therefore, for all , , we have Then, from (26), (27), and (30), we obtain that for and , Hence, which contradicts (21) and completes the proof.
Remark 5. If we change the condition in the definition of with a stronger one , (24) in the proof of Theorem 4 will be changed with Then, the definition of can be simplified as
In the sequel, we define
When , by (C3), we see that and (1) is simplified as
Now, we have the following theorem, but we should note that this result does not apply to the case where all points in are right dense.
Theorem 6. Assume that (C1)–(C4) with hold and that there exists a function such that . Let , and be defined by (35) and (36). Then, (37) is oscillatory provided there exists , , such that for any ,
one of the following holds(i) and (ii) and (iii) and where , is defined as before, and
Proof. Assume that (37) is not oscillatory. Without loss of generality, we may assume there exists such that for . Let be defined by (12) with . Then, by Lemma 2, (13) and (14) hold for . So, we have
where and are defined as in Lemma 2.
For simplicity in the following, we let , and and omit the arguments in the integrals. Multiplying , where is replaced by , by and integrating it with respect to from to and then using the
integration by parts formula, we have that For , Hence,
Furthermore, for , , and , For , , and , Hence, for all , , we have
From (42), (44), and (47), we have
For , implies that Hence,
Assume that condition (i) holds. Let in (50). Then, we obtain Taking the as on both sides, we have which contradicts (38).
The conclusions with conditions (ii) and (iii) can be proved similarly. We omit the details. The proof is complete.
When , Theorems 4 and 6 can be simplified as the following corollaries, respectively.
Corollary 7. Assume that (C1)–(C4) hold and that there exists a function such that . Also, suppose that there exists such that for any , Then, (1) is oscillatory.
Corollary 8. Assume that (C1)–(C4) with hold and that there exists a function such that . Let , and be defined by (35) and (36). Then, (37) is oscillatory provided that there exists , , such that for
any , one of the following holds(i) and (ii) and (iii) and
Remark 9. When and , Theorems 4 and 6 reduce to [16, Theorems 2.1 and 2.2], respectively. When , , , and , Theorems 4 and 6 reduce to [8, Theorems 2.1 and 2.2], respectively.
3. Examples
In this section, we will show the application of our oscillation criteria in two examples. We first give an example to demonstrate Theorem 4 (or Corollary 7).
Example 10. Consider the equation where , , , and , so we have , , and . Let and , we have
(1) , That is, (53) holds. By Corollary 7, we see that (57) is oscillatory;
(2) , that is, (53) holds. By Corollary 7, we see that (57) is oscillatory;
(3) , , that is, (53) holds. By Corollary 7, we see that (57) is oscillatory;
(4) , that is, (53) holds. By Corollary 7, we see that (57) is oscillatory.
The second example illustrates Theorem 6.
Example 11. Consider the equation where , , , and , so we have , . Let , we have
(1) , let . When is sufficiently large, there exists such that that is, in Theorem 6, (i) and (38) hold. Then, (62) is oscillatory;
(2) , , let . When is sufficiently large, there exists such that | {"url":"http://www.hindawi.com/journals/ddns/2013/315158/","timestamp":"2014-04-16T23:03:24Z","content_type":null,"content_length":"1046275","record_id":"<urn:uuid:fb6a33a9-b5ef-4f92-9556-05df7346522f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
2272 -- Bullseye
Time Limit: 1000MS Memory Limit: 65536K
Total Submissions: 4629 Accepted: 2188
A simple dartboard consists of a flat, circular piece of cork with concentric rings drawn on it. Darts are thrown at the board by players in an attempt to hit the center of the dartboard (the
Bullseye). The region between each pair of rings (or the center and the first ring) represents a certain point value. The closer the region is to the center of the dartboard, the more points the
region is worth, as shown in the diagram below:
Ring radii are at 3", 6", 9", 12" and 15" (the Bullseye has a diameter of 6"). A game of Simple Darts between two players is played as follows. The first player throws 3 darts at the board. A score
is computed by adding up the point values of each region that a dart lands in. The darts are removed. The second player throws 3 darts at the board; the score for player two is computed the same way
as it is for player one. The player with the higher score wins.
For this problem, you are to write a program that computes the scores for two players, and determine who, if anyone, wins the game. If a dart lands exactly on a ring (region boundary), the higher
point value is awarded. Any dart outside the outer ring receives no points. For the purposes of this problem, you can assume that a dart has an infinitely fine point and cannot land partially on a
ring; it is either on the ring or it is not on the ring. Standard double precision floating point operations should be used.
Input consists of 1 or more datasets. A dataset is a line with 12 double-precision values separated by spaces. Each pair of values represents the X and Y distances respectively of a dart from the
center of the board in inches (the center is located at X = 0, Y = 0). The range of values is: -20.0<=X, Y<=20.0. Player one's darts are represented by the first 3 pairs of values, and player two's
by the last 3 pairs of values. Input is terminated by the first value of a dataset being -100.
For each dataset, print a line of the form:
SCORE: N to M, PLAYER P WINS.
SCORE: N to M, TIE.
N is player one's score, and M is player two's score. P is either 1 or 2 depending on which player wins. All values are non-negative integers.
Recall: r^2 = x^2 + y^2
where r is the radius, and (x, y) are the coordinates of a point on the circle.
Sample Input
-9 0 0 -4.5 -2 2 9 0 0 4.5 2 -2
-19.0 19.0 0 0 0 0 3 3 6 6 12 12
-100 0 0 0 0 0 0 0 0 0 0 0
Sample Output
SCORE: 240 to 240, TIE.
SCORE: 200 to 140, PLAYER 1 WINS. | {"url":"http://poj.org/problem?id=2272","timestamp":"2014-04-19T11:56:27Z","content_type":null,"content_length":"8049","record_id":"<urn:uuid:b14e4380-078d-4459-be94-3f76beced73f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
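One way to implement this (a sketch, not an official judge solution): compare each dart's squared distance r^2 = x^2 + y^2 with the squared ring radii, using <= so a dart exactly on a boundary earns the higher point value. The region values 100, 80, 60, 40, 20 are inferred from the sample data, since the diagram did not survive extraction:

```python
# Squared ring radii paired with point values; a dart exactly on a
# boundary satisfies r2 <= radius**2 and therefore earns the higher score.
RINGS = [(9.0, 100), (36.0, 80), (81.0, 60), (144.0, 40), (225.0, 20)]

def dart_score(x, y):
    r2 = x * x + y * y
    for limit, points in RINGS:
        if r2 <= limit:
            return points
    return 0  # outside the outer ring

def solve(tokens):
    out = []
    i = 0
    while tokens[i] != -100:          # first value of a dataset terminates
        vals = tokens[i:i + 12]
        i += 12
        n = sum(dart_score(vals[2 * k], vals[2 * k + 1]) for k in range(3))
        m = sum(dart_score(vals[2 * k], vals[2 * k + 1]) for k in range(3, 6))
        if n > m:
            out.append("SCORE: %d to %d, PLAYER 1 WINS." % (n, m))
        elif m > n:
            out.append("SCORE: %d to %d, PLAYER 2 WINS." % (n, m))
        else:
            out.append("SCORE: %d to %d, TIE." % (n, m))
    return out

sample = """-9 0 0 -4.5 -2 2 9 0 0 4.5 2 -2
-19.0 19.0 0 0 0 0 3 3 6 6 12 12
-100 0 0 0 0 0 0 0 0 0 0 0"""
for line in solve([float(t) for t in sample.split()]):
    print(line)
# SCORE: 240 to 240, TIE.
# SCORE: 200 to 140, PLAYER 1 WINS.
```

For the actual judge, the same `solve` function can be fed the whitespace-split tokens of standard input instead of the embedded sample.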
Reliable and Restricted Quickest Path Problems
• In a dynamic network, the quickest path problem asks for a path minimizing the time needed to send a given amount of flow from source to sink along this path. In practical settings, for example
in evacuation or transportation planning, the reliability of network arcs depends on the specific scenario of interest. In this circumstance, the question of finding a quickest path among all
those having at least a desired path reliability arises. In this article, this reliable quickest path problem is solved by transforming it to the restricted quickest path problem. In the latter,
each arc is associated a nonnegative cost value and the goal is to find a quickest path among those not exceeding a predefined budget with respect to the overall (additive) cost value. For both,
the restricted and reliable quickest path problem, pseudopolynomial exact algorithms and fully polynomial-time approximation schemes are proposed.
Author: Stefan Ruzika, Markus Thiemann
URN (permanent link): urn:nbn:de:hbz:386-kluedo-16851
Serie (Series number): Report in Wirtschaftsmathematik (WIMA Report) (135)
Document Type: Report
Language of publication: English
Year of Completion: 2011
Year of Publication: 2011
Publishing Institute: Technische Universität Kaiserslautern
Tag: Dynamic Network Flows ; FPTAS ; Pseudopolynomial-Time Algorithm; Restricted Shortest Path
Faculties / Organisational entities: Fachbereich Mathematik
DDC-Cassification: 510 Mathematik | {"url":"https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/2288","timestamp":"2014-04-17T03:58:23Z","content_type":null,"content_length":"19075","record_id":"<urn:uuid:9274916c-7015-44f1-a12c-653a56c9a212>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lipschitz functions
November 28th 2011, 06:39 PM #1
Aug 2010
Lipschitz functions
Show that a continuous function f on [a,b] is Lipschitz if its upper and lower derivatives
(Dini derivatives) are bounded on [a,b].
I have no clue. I would appreciate an answer to this question; thank you all.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/differential-geometry/192965-lipschitze-functions.html","timestamp":"2014-04-18T02:02:44Z","content_type":null,"content_length":"29000","record_id":"<urn:uuid:d116c7f9-e397-45d5-853f-896469315c2d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Simple Vector Problem - Finding the x-component
1. The problem statement, all variables and given/known data
Find x- and y-components of the following vectors
v = 7.0 cm/s, negative x-direction
2. Relevant equations
Vx = |V|Cos(Theta)
Vy = |v|Sin(Theta)
3. The attempt at a solution
I'm a bit stuck here. I slotted in 7cm/s for |v| but how do I find the components if I'm not given an angle or graphical representation? | {"url":"http://www.physicsforums.com/showpost.php?p=2231235&postcount=3","timestamp":"2014-04-19T17:36:37Z","content_type":null,"content_length":"7537","record_id":"<urn:uuid:bec18bfc-464b-4ac5-b609-bef99b33f13a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00211-ip-10-147-4-33.ec2.internal.warc.gz"} |
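One way to see it (a sketch using the same component formulas quoted above): a vector along the negative x-direction makes an angle of 180° with the positive x-axis, so the formulas still apply:

```python
import math

v = 7.0                      # magnitude, cm/s
theta = math.radians(180.0)  # "negative x-direction" = 180 degrees from +x

vx = v * math.cos(theta)     # -7.0 cm/s
vy = v * math.sin(theta)     # 0 cm/s, up to floating-point rounding

print(vx, vy)
```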
Color Arrangements on the Face of a Cube
Date: 11/23/2001 at 12:51:14
From: Jhonen
Subject: Color Arrangements on the Face of a Cube
Dear Dr. Math,
I've heard the familiar problem about whether or not it is possible to
use 3, 4, etc. colors to paint the faces of a cube such that no face
painted one color is adjacent to another face of the same color.
Here's another problem in the same vein which I'm not sure how to
approach other than by trying to write out every possible combination:
Suppose you have three colors in which to paint the face of a cube,
either red, blue, or yellow. How many different color patterns are
there if each face of the cube must be painted red, blue, or yellow
(assuming that two patterns are the same if the two cubes painted
with them can be made to look the same by simply rotating one or the
other. E.g. there is only one pattern of say, five red faces and one
It seems that there must be patterns of only one color, and six more
of five of the same colors and one different one. Further, it seems as
if there are two possibilities for the case of four sides of the same
color and two sides of one different color, two possibilities for
three sides of one color, and three of another. It seems that if one
allows cubes of only two colors, that there are: 8*2*3 = 48 different
possible color patterns.
Is there a better way to approach this problem, and how can the
conditions stated in the problem, which allow for each cube to have a
color pattern of up to three colors, be dealt with?
Date: 11/23/2001 at 18:31:48
From: Doctor Anthony
Subject: Re: Color Arrangements on the Face of a Cube
This is a problem requiring the use of Burnside's lemma. You consider
the 24 symmetries of a cube and sum all those symmetries that keep
colours fixed. The number of non-equivalent configurations is then the
total sum divided by the order of the group (24 in this case).
We first find the cycle index of the group of FACE permutations
induced by the rotational symmetries of the cube.
Looking down on the cube, label the top face 1, the bottom face 2 and
the side faces 3, 4, 5, 6 (clockwise).
You should hold a cube and follow the way the cycle index is
calculated as described below. The notation (1)(23)(456) =
(x1)(x2)(x3) means that we have a permutation of 3 disjoint cycles
in which face 1 remains fixed, face 2 moves to face 3 and 3 moves to
face 2, face 4 moves to 5, 5 moves to 6, and 6 moves to 4. (This is
not a possible permutation for our cube, it is just to illustrate the
notation.) We now calculate the cycle index.
(1) e = (1)(2)(3)(4)(5)(6); index = (x1)^6
(2) 3 permutations like (1)(2)(35)(46); index 3(x1)^2.(x2)^2
(3) 3 permutations like (1)(2)(3456); index 3(x1)^2.(x4)
(4) 3 further as above but counterclockwise; index 3(x1)^2.(x4)
(5) 6 permutations like (15)(23)(46); index 6(x2)^3
(6) 4 permutations like (154)(236); net index 4(x3)^2
(7) 4 further as above but counterclockwise; net index 4(x3)^2
Then the cycle index is
P[x1,x2,...x6] =(1/24)[x1^6 + 3x1^2.x2^2 + 6x2^3 + 6x1^2.x4 + 8x3^2]
and the pattern inventory for these configurations is given by the
generating function:
(I shall use r = red, b = blue, y = yellow as the three colours.)
f(r,b,y) = (1/24)[(r+b+y)^6 + 3(r+b+y)^2.(r^2+b^2+y^2)^2
+ 6(r^2+b^2+y^2)^3 + 6(r+b+y)^2.(r^4+b^4+y^4)
+ 8(r^3+b^3+y^3)^2]
and putting r = 1, b = 1, y = 1 this gives
= (1/24)[3^6 + 3(3^2)(3^2) + 6(3^3) + 6(3^2)(3) + 8(3^2)]
= (1/24)[729 + 243 + 162 + 162 + 72]
= 1368/24
= 57
So there are 57 non-equivalent configurations.
- Doctor Anthony, The Math Forum
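This count can be confirmed by brute force, independently of the cycle-index computation (a sketch; the face ordering and the two generator rotations are an arbitrary choice, not part of the original answer):

```python
from itertools import product

# Faces ordered (up, down, front, right, back, left).
def yaw(c):    # quarter turn about the vertical axis
    u, d, f, r, b, l = c
    return (u, d, l, f, r, b)

def pitch(c):  # quarter turn about the left-right axis
    u, d, f, r, b, l = c
    return (b, f, u, r, d, l)

# Close the two generators into the full rotation group of the cube.
identity = (0, 1, 2, 3, 4, 5)
group, frontier = {identity}, [identity]
while frontier:
    g = frontier.pop()
    for h in (yaw(g), pitch(g)):
        if h not in group:
            group.add(h)
            frontier.append(h)

def canonical(coloring):
    # Lexicographically smallest image of the coloring over all rotations.
    return min(tuple(coloring[i] for i in g) for g in group)

orbits = {canonical(c) for c in product(range(3), repeat=6)}
print(len(group), len(orbits))  # 24 rotations, 57 non-equivalent colorings
```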
Date: 09/29/2008 at 18:40:06
From: Gopal
Subject: Distinct colored cubes
I read your article on painting the faces of a cube but I couldn't
understand how it was done. Can we somehow use a combination formula
to do it?
I started with six of the same color, than five of the same color,
then four of same color and so on. As I moved down the chain I kept
record of the number of distinct cubes. So with all the same color
there are 3 distinct cubes. With five the same color there are 6
distinct possibilities. I got stuck when I had three of the same
color painted.
Date: 09/30/2008 at 05:48:40
From: Doctor Jacques
Subject: Re: Distinct colored cubes
Hi Gopal,
The solution of this problem requires the use of a theorem of
Burnside; that theorem is part of a discipline known as group
theory. I don't know if you are familiar with group theory: I will
give you the practical method of computing the result, and some kind
of "plausibility argument" to show you why it works (this is not a
proof). If you know some group theory and want the actual proof,
please write back.
To illustrate the problem, I will start with a simpler version. We
will consider a board of 4 squares:
| 1 | 2 |
| 3 | 4 |
and try to count the number of possible ways of coloring the 4
squares in two colors, say black and white, where two colorings are
considered the same if they can be obtained from each other by a
symmetry of the whole square.
If we ignore symmetries, we can color the square in 2^4 = 16 ways,
because each of the 4 squares can be colored in two ways. (If we had
k colors available, there would be k^4 possible colorings). This
means that, if we take symmetries into account, we can only reduce
that number.
The first step is to define precisely what are the symmetries, or
operations, we want to consider. Those are the permutations of the 4
squares that correspond to a physical movement of the board
(rotation or flipping). Let us enumerate them.
We have the permutation that does nothing, i.e., that leaves every
square in its place. We will denote that operation by I (for
Identity). You might wonder why we bother to include it; I will come
to that later.
Next, we have the rotations by 90° in either direction. We will call
those R (for Right) and L (for Left). For example, R corresponds to
the permutation:
1 -> 2 -> 4 -> 3 -> 1
which is usually written as (1243).
We have the reflections across the horizontal and vertical medians,
we will call these H and V. We have:
H = (13)(24)
V = (12)(34)
We have the reflections across the diagonals, we will call these U
and D, with:
U = (14)(2)(3)
D = (1)(23)(4)
(This means, for example, that D leaves 1 and 4 fixed, and
interchanges 2 and 3)
Finally, we have the rotation by 180°, which we call C:
C = (14)(23)
In total, we have 8 operations: I, R, L, H, V, U, D, C.
This set of operations (we will call it G) has some interesting properties:
If A and B are two operations, then, by executing A followed by B
(we will call that operation AB), we get another operation of the
set. This is a requirement for symmetries, because if neither A and
B changes the color of any square, then the same must be true for
AB. For example, if we execute H followed by V, the squares move as
Initial position 1 2 3 4
After H 3 4 1 2
After HV 4 3 2 1
This means that HV sends 1 to 4, 2 to 3, 3 to 2, and 4 to 1; we can
write this as (14)(23), which is the same as C.
Now, if, for example, we execute H twice, then each square goes back
to its initial position, which is the operation we denoted by I (the
one that does nothing). This explains why we need to include I in
our set. Note that we would write I as (1)(2)(3)(4).
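The composition rule described above is easy to check mechanically. The sketch below (my own illustration, not part of the original answer) represents each operation as a dictionary mapping each square to its image, and verifies that H followed by V gives C, and that H followed by H gives the identity:

```python
# Each symmetry as a dict: square -> where that square is sent.
H = {1: 3, 2: 4, 3: 1, 4: 2}   # reflection across the horizontal median
V = {1: 2, 2: 1, 3: 4, 4: 3}   # reflection across the vertical median
C = {1: 4, 2: 3, 3: 2, 4: 1}   # rotation by 180 degrees

def compose(a, b):
    """The operation 'a followed by b'."""
    return {square: b[a[square]] for square in a}

print(compose(H, V) == C)  # True
print(compose(H, H))       # {1: 1, 2: 2, 3: 3, 4: 4}, i.e. the identity I
```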
This set of operations is an example of what is called a group (in
general, a group needs to satisfy additional requirements, but these
are always satisfied for sets of permutations of a finite set).
After these preliminaries, we can go back to the problem of counting
the colorings of the square. We want to count the number of distinct
colorings of the square, in such a way that two colorings are
considered the same if there is some operation in G that transforms
one into the other. In other words, we want to partition the
2^4 = 16 colorings into sets (called orbits) in such a way that two
colorings belong to the same orbit if there is an operation in G
that sends one to the other. We are interested in the number of orbits.
The intuitive approach is to count separately the number of distinct
colorings invariant for each operation of G, and to take the
average. Burnside's theorem essentially says that this gives the
right result; as I said before, if you know some group theory and
want to discuss this further, please write back.
Let us start with the simplest case, namely the identity
I = (1)(2)(3)(4). In this case, all 16 colorings are invariant under
I (because I moves nothing), and we have 16 distinct colorings.
For the operation R = (1243), as soon as we choose a color for
square 1, all the colors of the other squares are fixed. Indeed, R
sends square 1 to square 2, which must therefore have the same color
as square 1. In the same way, square 2 is sent to square 3, and, by
continuing the argument, we see that all the squares must have the
same color. This means that there are only two possibilities,
depending on the color we choose for square 1 : those are the
colorings where all the squares have the same color.
For the operation H = (13)(24), we can freely choose the colors for
squares 1 and 2, and this forces the colors of squares 3 (=1) and 4
(=2). This means that we have 2^2 = 4 distinct colorings invariant
under H.
At this point, we can see a pattern emerging. For example, H is
written as (13)(24), i.e., as 2 cycles. This means that the sets
{1,3} and {2,4} are separately invariant under H: each element of
one of these sets (cycles) is sent by H to another element of the
same cycle. This implies that elements of each cycle must have the
same color; once you select a color for one element, the colors of
the other elements of the set are fixed. You can only choose colors
for one element of each cycle, and the number of colorings invariant
under the operation is 2^t, where t is the number of cycles. Note
that you must write down all the cycles, including those of 1
element, like we did for I (in that case, there were 4 cycles, and
the number of colorings was 2^4).
We should do this for all the operations in G. However, we can
simplify the problem a little, by noting that some operations, like
H and V, or D and U, are "similar" in a sense that should be made
precise. The important point is that similar operations have the
same number of cycles, and will therefore yield the same number of
colorings. We can therefore compute the number of colorings for one
set of each class of similar operations, and multiply by the number
of operations in the class. This gives the following table:
Operations N t 2^t N*2^t
I 1 4 16 16
L,R 2 1 2 4
H,V 2 2 4 8
U,D 2 3 8 16
C 1 2 4 4
Total 8 48
In this table, N is the number of operations in each class, t is the
number of cycles in each operation, 2^t is the number of distinct
colorings for that operation, and N*2^t is the total number of
colorings to take into account for the class.
As I said before, the answer to the problem is the average of all
the distinct colorings. As there are 8 operations, this average is
48/8 = 6 : there are 6 distinct colorings of the square. Using the
order [1,2,3,4], these colorings are:
BBBB, WWWW, BBWW, BWWB, BWWW, WBBB
In this case, it would have been easy to obtain those colorings by inspection.
If we had k colors instead of 2, we could repeat the same argument.
However, in this case, there would be k possible choices for each
square, and we would have to replace 2^t by k^t in the above table.
This would give the number of colorings as:
(k^4 + 2k^3 + 3k^2 + 2k)/8
This shows, for example, that, with 3 colors, there are 21 distinct
colorings (this would already be harder to obtain by inspection).
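As a sanity check (this code is my own illustration, not part of the original answer), we can enumerate all k^4 colorings by brute force and count the orbits directly: each orbit is represented by the lexicographically smallest coloring it contains. The squares are indexed 0..3 for 1..4, and each symmetry is written as a tuple p such that the transformed coloring reads transformed[j] = coloring[p[j]]:

```python
from itertools import product

# The 8 symmetries of the 2x2 board, in the order I, R, L, H, V, U, D, C.
SYMS = [
    (0, 1, 2, 3),  # I
    (2, 0, 3, 1),  # R
    (1, 3, 0, 2),  # L
    (2, 3, 0, 1),  # H
    (1, 0, 3, 2),  # V
    (3, 1, 2, 0),  # U
    (0, 2, 1, 3),  # D
    (3, 2, 1, 0),  # C
]

def count_colorings(k):
    """Count colorings of the 4 squares with k colors, up to symmetry."""
    seen = set()
    for c in product(range(k), repeat=4):
        # canonical representative of the orbit of c
        seen.add(min(tuple(c[p[j]] for j in range(4)) for p in SYMS))
    return len(seen)

print(count_colorings(2))  # 6
print(count_colorings(3))  # 21
```

Both values agree with the formula (k^4 + 2k^3 + 3k^2 + 2k)/8.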
Now, we can use the same argument for the colorings of a cube with k
colors. In this case, there are 24 operations, which are listed,
with their cycle structure, in the article you refer to.
We can make a similar table. In this case, the second column will
contain an example of cycle structure for each class.
Class Structure N t N*k^t
1 (1)(2)(3)(4)(5)(6) 1 6 k^6
2 (1)(2)(35)(46) 3 4 3k^4
3 (1)(2)(3456) 3 3 3k^3
4 (1)(2)(3654) 3 3 3k^3
5 (15)(23)(46) 6 3 6k^3
6 (154)(236) 4 2 4k^2
7 (145)(263) 4 2 4k^2
Total 24 k^6+3k^4+12k^3+8k^2
The operations are as follows:
* Class 1 only contains the identity.
* Class 2 contains the rotations of 180° through an axis going
through the centers of two opposite faces.
* Classes 3 and 4 contain rotations of 90° through the same axes.
* Class 5 contains rotations of 180° through the line joining the
midpoints of two opposite edges.
* Classes 6 and 7 contain rotations of 120° through an axis joining
two opposite vertices.
This gives the total number of distinct colorings as:
(k^6 + 3k^4 + 12k^3 + 8k^2)/24
In the particular case of 3 colors (k=3), we get 57 colorings.
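As an editorial sketch (not part of the original answer), the cube count is easy to check numerically from the cycle table above:

```python
def cube_face_colorings(k):
    # Burnside's count: average k**(number of cycles) over the 24 rotations
    return (k**6 + 3*k**4 + 12*k**3 + 8*k**2) // 24

for k in (1, 2, 3):
    print(k, cube_face_colorings(k))
# 1 1
# 2 10
# 3 57
```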
Please feel free to write back if you require further assistance.
- Doctor Jacques, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/56240.html","timestamp":"2014-04-20T16:18:00Z","content_type":null,"content_length":"19068","record_id":"<urn:uuid:d28b7f74-ab0f-4829-847f-d1d768886069>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
PowerPoint Presentations
Presentation Summary : DEDUCTIVE vs. INDUCTIVE REASONING Section 1.1 Problem Solving Logic – The science of correct reasoning. Reasoning – The drawing of inferences or conclusions from ...
Source : http://math.la.asu.edu/%7Etdalesan/mat142/notes/sec1.1.ppt
Deductive Vs Inductive Reasoning PPT - College of ... PPT
Presentation Summary : Formal Versus Informal Logic Deductive Versus Inductive Forms of Reasoning Two basic categories of human reasoning Deduction: reasoning from general premises, which ...
Source : http://commfaculty.fullerton.edu/rgass/235%20Spring%202009/Deduction%20Vs.%20Induction.ppt
DEDUCTIVE vs. INDUCTIVE REASONING - Arizona State University PPT
Presentation Summary : DEDUCTIVE vs. INDUCTIVE REASONING Section 1.1 Problem Solving Logic – The science of correct reasoning. Reasoning – The drawing of inferences or conclusions from ...
Source : http://www.asu.edu/courses/mat142ej/logic/powerpoints/deductive_vs_induction_dalesandro.ppt
Inductive vs. Deductive Reasoning - EagleNetwork - home PPT
Presentation Summary : Inductive vs. Deductive Reasoning Deductive Reasoning Starts with a general rule (a premise) which we know to be true. Then, from that rule, we make a true conclusion ...
Source : http://eaglenetwork.wikispaces.com/file/view/inductive+vs+deductive+reasoning.ppt
Deductive vs Inductive Reasoning - Arizona State University PPT
Presentation Summary : Deductive vs. Inductive Reasoning Objectives Use a Venn diagram to determine the validity of an argument. Complete a pattern with the most likely possible next item.
Source : http://www.asu.edu/courses/mat142ej/logic/powerpoints/deductive_vs_inductive.ppt
Presentation Summary : Inductive Model Also known as guided discovery Teacher’s role is to provide examples that illustrate the content and then guide students’ efforts to find patterns ...
Source : http://www.pearsoncustom.com/ufl_ctsm/ppts/inductive_model_prese.ppt
Deductive Reasoning - Quia PPT
Presentation Summary : Deductive and Inductive Reasoning EQ: What clues indicate deductive and inductive reasoning within a text? Additional Activators Bones (video clip) “The Sweetest ...
Source : http://www.quia.com/files/quia/users/schnellet/Reasoning/Inductive-Deductive
Inductive and Deductive Reasoning - baumherclasses - home PPT
Presentation Summary : Title: Inductive and Deductive Reasoning Author: KKONNICK Last modified by: PBAUMHER Created Date: 2/15/2008 10:47:29 PM Document presentation format
Source : http://baumherclasses.wikispaces.com/file/view/Unit%202.2%20Logic.ppt
Presentation Summary : DEDUCTIVE REASONING We reason deductively when we draw a conclusion from a set of given facts using the laws of logic Inspector Garble DEDUCTIVE REASONING If the ...
Source : http://faculty.bucks.edu/leutwyle/Concepts%20I/Uncle%20Alfred.ppt
Inductive vs. Deductive Reasoning - McGavockEnglish1 - home PPT
Presentation Summary : Inductive vs. Deductive Reasoning Descriptions and examples Types of Reasoning Deductive reasoning goes from general to specific Inductive reasoning goes from ...
Source : http://mcgavockenglish1.wikispaces.com/file/view/Inductive+vs+Deductive+Reasoning.ppt
Inductive and Deductive Reasoning - Riverdale High School PPT
Presentation Summary : Inductive Reasoning: Works the other way around. Moves from premises that state specific facts and observations to a broader generalization or conclusion drawn from ...
Source : http://www.rhs.rcs.k12.tn.us/teachers/rathmannk/documents/Deductive_Inductivereasoning_000.pptx
Inductive and deductive approaches to grammar in second ... PPT
Presentation Summary : Title: Inductive and deductive approaches to grammar in second language learning: process, product and students’ perceptions Author: bb Last modified by
Source : http://www.caslt.org/pdf/PowerPoints_CERRBALColloquium2008/CERRBAL_GladysJean.ppt
Inductive Thinking - User Homepages PPT
Presentation Summary : Inductive Thinking Inductive arguments are those in which the premises are intended to provide support, but not conclusive evidence, for the conclusion.
Source : http://users.ipfw.edu/caseldij/PowerPoint/Inductive%20Thinking.ppt
Patterns of Deductive Thinking - User Homepages PPT
Presentation Summary : Patterns of Deductive Thinking In deductive thinking we reason from a broad claim to some specific conclusion that can be drawn from it. In inductive thinking we ...
Source : http://users.ipfw.edu/caseldij/PowerPoint/Patterns%20of%20Deductive%20Thinking.ppt
Deductive Reasoning - knoxhealthscience / FrontPage PPT
Presentation Summary : Deductive Vs. Inductive Induction is usually described as moving from the specific to the general, while deduction begins with the general and ends with the specific.
Source : http://knoxhealthscience.pbworks.com/f/deductive_reasoning.ppt
Presentation Summary : Inductive vs. Deductive Reasoning Inductive Reasoning Real world inductive inferences Reasoning under Uncertainty Bayes Rule Do people reason like Bayes rule ...
Source : http://psiexp.ss.uci.edu/research/teachingP140C/Lectures2010/2010_Reasoning.ppt
Presentation Summary : Deductive and Inductive Arguments In this tutorial you will learn to distinguish deductive arguments from inductive arguments. Go to next slide
Source : http://highered.mcgraw-hill.com/sites/dl/free/0767417399/34024/chap3a.ppt
Inductive + Deductive Logic PPT
Presentation Summary : Inductive & Deductive Logic Kirszner & Mandell White and Billings Deductive Reasoning Deductive Reasoning proceeds from a general premise or assumption to a specific ...
Source : http://blogs.muskegonisd.org/frickewi/files/2009/05/Inductive-Deductive-Logic.ppt
3 Basic Types of Reasoning PPT
Presentation Summary : 2 Basic Types of Reasoning Deductive Inductive Deductive Reasoning Attempts to provide sufficient (or conclusive) evidence for the conclusion Deductive reasoning can ...
Source : http://www2.ivcc.edu/jbeyer/PHL%201005/CT%20Deductive.ppt
Inductive and Deductive Reasoning - 2010yeagleyenglish ... PPT
Presentation Summary : Title: Inductive and Deductive Reasoning Author: Sarah Cice Last modified by: Sarah Cice Created Date: 12/16/2009 8:35:56 PM Document presentation format
Source : http://2010yeagleyenglish.pbworks.com/f/InductiveandDeductiveReasoning.ppt
Presentation Summary : Chapter 3.a Deductive and Inductive Arguments In this tutorial you will learn to distinguish deductive arguments from inductive arguments. All bats are mammals.
Source : http://highered.mcgraw-hill.com/sites/dl/free/0072879599/135758/bassham_powerpoint_ch03a.ppt
Presentation Summary : Chapter 2: Accounting Theory & Research Scientific method Deductive Inductive Positive accounting research Art or Science? Directions in accounting research ...
Source : http://www.swlearning.com/accounting/wolk/sixth_edition/powerpoint/ch02.ppt
Inductive v s Deductive Reasoning PPT
Presentation Summary : Inductive Reasoning. Focus on specific information. Search for patterns or connections in patterns. Construct a general statement which explains what you have observed
Source : http://images.pcmac.org/SiSFiles/Schools/TN/GreenevilleCity/GreenevilleHigh/Uploads/Presentations/Inductive%20v%20s%20Deductive%20Reasoning09.pptx
If you find powerpoint presentation a copyright It is important to understand and respect the copyright rules of the author. Please do not download if you find presentation copyright. If you find a
presentation that is using one of your presentation without permission, contact us immidiately at | {"url":"http://www.xpowerpoint.com/ppt/deductive-inductive.html","timestamp":"2014-04-21T09:38:33Z","content_type":null,"content_length":"22605","record_id":"<urn:uuid:c05fe0d2-34da-44ea-b9a6-6c6421d62d6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statically Indeterminate Beams
Most of the structures encountered in real-life are Statically Indeterminate.
Statically Indeterminate Beams: No. of Reactions > No. of Eqns. of Equilibrium
Degree of Static Indeterminacy = No. of Reactions in excess of the No. of Eqns of Equilibrium.
Static Redundants = excess reactions; must be selected for each particular case.
Assumption throughout this chapter is that the beams are made of Linearly Elastic Materials.
- Propped Cantilever Beam
- Fixed – End Beam
- Continuous Beam
(more than one span)
There are 4 ways of solving these types of problems.
1. Use of the deflection curve
2. Moment – Area Method
3. Superposition (Flexibility Method)
4. Indeterminate Beams Tables (handout)
We will examine No. 1 & 3, above.
1. pick redundant reaction
2. express other reactions in terms of the redundant reaction
3. write diff. eqn. of the deflection curve
4. integrate to obtain general solution
5. apply B.C. to obtain constants of integration & the redundant reaction
6. solve for the remaining reactions using equations or equilibrium
This method is useful for:
- simple loading conditions
- beams of only one span (not good method for continuous beam)
EXAMPLE No. 1
The beam shown.
Reactions at supports using the deflection curve.
MOMENT – AREA METHOD (just mention)
1. pick redundant reaction(s)
2. remove redundant reaction(s) to leave a statically determinate beam
3. apply loads on released structure
4. draw M / EI diagram for these loads
5. apply redundant reactions as loads
6. draw M / EI diagram for redundant reactions
7. apply moment – area theorems to find redundant...
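The superposition steps above can be sketched numerically for the classic example of a propped cantilever under a uniform load (an assumption for illustration; the deflection formulas are standard cantilever table results, and the function names are mine):

```python
# Assumed example: propped cantilever, span L, uniform load w, constant EI.
# Redundant chosen as the prop reaction R. Standard cantilever tip deflections:
#   under the uniform load:    delta = w*L**4 / (8*EI)   (downward)
#   under a tip point load R:  delta = R*L**3 / (3*EI)   (upward)
# Compatibility (zero net deflection at the prop):
#   w*L**4/(8*EI) = R*L**3/(3*EI)  ->  R = 3*w*L/8

def prop_reaction(w, L):
    """Prop reaction of a propped cantilever under a uniform load w."""
    return 3.0 * w * L / 8.0

def wall_reactions(w, L):
    """Remaining reactions at the fixed end, from equilibrium."""
    R = prop_reaction(w, L)
    V = w * L - R                 # vertical wall reaction = 5*w*L/8
    M = w * L**2 / 2 - R * L      # wall moment = w*L**2/8
    return V, M

print(prop_reaction(10.0, 4.0))   # 15.0
print(wall_reactions(10.0, 4.0))  # (25.0, 20.0)
```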
Sign Up or Login to your account to leave your opinion on this Essay. | {"url":"http://www.essaydepot.com/doc/34445/Statically-Indeterminate-Beams","timestamp":"2014-04-17T18:24:43Z","content_type":null,"content_length":"18240","record_id":"<urn:uuid:00187c24-d65e-4b49-87c9-987af6652534>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00234-ip-10-147-4-33.ec2.internal.warc.gz"} |
Weymouth Calculus Tutor
Find a Weymouth Calculus Tutor
...As noted above, trigonometry is usually encountered as a part of a pre-calculus course. In my view, much of the traditional material associated with trigonometry should be replaced by an
introduction to the linear algebra of vectors, which provides alternative methods of solving many of the prob...
7 Subjects: including calculus, physics, algebra 1, algebra 2
...After college I worked as a tutor in the California Public School System. I met with six students over the course of a school year on a fixed, weekly schedule. These students were male and
female, ranging in age from elementary to middle school.
49 Subjects: including calculus, English, reading, GRE
...To complement the technical knowledge that I have accumulated and continue to gain, I have a strong background in academic communication -- both written and oral -- having presented and
published scientific research throughout my four years in high school. I have experience mentoring and tutorin...
38 Subjects: including calculus, reading, English, writing
...I want to tutor students, because I believe everyone has their own way of learning and I can cater to that. I specialize in tutoring Biology, Chemistry, Calculus, and Algebra for Middle School
and High School level courses. I aim to find a student's individual way of learning to get the most out of each tutoring experience and find the best way to delve into difficult topics.
14 Subjects: including calculus, chemistry, geometry, biology
...I am fluent in Mandarin and Cantonese. I took Chinese classes and obtained fairly good grades throughout elementary school and high school. For example, in the National College Entrance Exam, I
obtained a score in Chinese above 98% of all students in the Guangdong province.
16 Subjects: including calculus, geometry, Chinese, algebra 1 | {"url":"http://www.purplemath.com/Weymouth_Calculus_tutors.php","timestamp":"2014-04-16T19:19:14Z","content_type":null,"content_length":"23913","record_id":"<urn:uuid:d58a2968-a4b4-4401-bac9-473e1cd3a62f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equations of Parallel and Perpendicular Lines
5.4: Equations of Parallel and Perpendicular Lines
Created by: CK-12
Learning Objectives
At the end of this lesson, students will be able to:
• Determine whether lines are parallel or perpendicular.
• Write equations of perpendicular lines.
• Write equations of parallel lines.
• Investigate families of lines.
Terms introduced in this lesson:
parallel lines
perpendicular lines
family of lines
vertical shift
Teaching Strategies and Tips
Use the introduction to make observations such as:
• Parallel lines have different $y$-intercepts.
• Parallel lines have the same slope. Allow students to come to this conclusion using a rise-over-run argument: If one line runs (or rises) more than another line, then the lines cannot be
parallel; they will eventually meet.
• Perpendicular lines have opposite reciprocal slopes. Encourage students to draw two perpendicular lines. Since one line will be increasing and the other decreasing, this shows that the slopes
have opposite signs. Have students also construct the rise-over-run triangles for each line. This will demonstrate that the rise of one line is the run of the other and vice-versa; the slopes are
also reciprocal.
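The slope conditions in the observations above can be summarized in a short classifier (an illustrative sketch of mine, assuming both lines are non-vertical):

```python
def relationship(m1, m2):
    """Classify two non-vertical lines by their slopes m1 and m2."""
    if m1 == m2:
        return "parallel"
    if m1 * m2 == -1:          # opposite reciprocal slopes
        return "perpendicular"
    return "neither"

print(relationship(2, 2))     # parallel
print(relationship(2, -0.5))  # perpendicular, since 2 * (-1/2) = -1
print(relationship(2, 3))     # neither
```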
Have students recognize in Example 5 that $y=-2$ is a horizontal line through the point $(4,-2)$.
Additional Examples:
What is the equation of a line parallel to the $x$-axis passing through $(2, -1)$?

Solution: A line parallel to the $x$-axis has the form $y = k$. Since the line passes through $(2, -1)$, the equation is $y = -1$.

Find the equation of a line perpendicular to $x = 3$.

Solution: Since $x = 3$ is a vertical line, any perpendicular line is horizontal and has the form $y = k$ for some constant $k$.

Find the equation of a line perpendicular to $x = 3$ passing through $(14, 15)$.

Solution: $y = 15$.
Use Example 9 and a graphing utility to investigate families of lines.
• Fixing the slope $m$ and varying the $y$-intercept $b$ in $y = mx + b$ generates a family of parallel lines; each change in $b$ is a vertical shift.
• Fixing the $y$-intercept $b$ and varying the slope $m$ generates a family of lines through the same point on the $y$-axis.
Error Troubleshooting
If you would like to associate files with this None, please make a copy first. | {"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/5.4/anchor-content","timestamp":"2014-04-17T23:24:13Z","content_type":null,"content_length":"117758","record_id":"<urn:uuid:4583fcee-9354-44ba-897c-71c9ea20d56f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Infinite sums in python
I have heard that python can do infinite sums. For instance if I want to evaluate the infinite sum:
1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ...
How should I go about? I am a newbie to python. So I would appreciate if someone could write out the entire code and if I need to include/import something.
For instance, in wolfram alpha if I input Summation (-1)^(n-1)/(2*n-1) from n=1 to infinity it gives the answer as 0.785395. I want the answer computed to a desired accuracy say, as in the case of
wolfram alpha upto 6 digits.
Further, I was looking at this post here and tried to mimic that but it gives me the following errors:
`NameError: name 'Infinity' is not defined`
`NameError: name 'Inf' is not defined`
Thanks, Adhvaitha
1 Are you expecting that you will be able to compute the limit of an infinite series? Which will often be an irrational number? Or do you want an approximation with a certain level of precision? –
Eric Wilson Sep 12 '11 at 14:05
This is a job for mathematicans or perhaps scientific software, not for a general-purpose programming language. – delnan Sep 12 '11 at 14:08
2 @Adhvaitha That only works on a Cray supercomputer, but this beast can execute any endless loop in 7-8 seconds! – Alexander Poluektov Sep 12 '11 at 14:09
1 For instance, in wolfram alpha if I input "Summation (-1)^(n-1)/(2*n-1) from n=1 to infinity" it gives the answer as 0.785395. I want the answer computed to a desired accuracy say, as in the case
of wolfram alpha upto 6 digits. And the downvoter, could you kindly provide the reason for the downvote. – Adhvaitha Sep 12 '11 at 14:10
3 @Adhvaitha I'm not a downvoter but I'd hazard a guess at the reason: your question, on first glance, seems to be a show me the codes type question, which some people automatically downvote. –
Lauritz V. Thaulow Sep 12 '11 at 14:21
5 Answers
While it still is finite, you can approximate that series using the fractions and decimal modules:
from fractions import Fraction
from decimal import Decimal

repetitions = 100
d = 1
r = Fraction(0)

for n in range(repetitions):
    r += Fraction(1, d) - Fraction(1, d + 2)
    d += 4

print(Decimal(r.numerator) / Decimal(r.denominator))  # approximately pi/4

I think this comes closest to what you want to do.
Thanks for the reminder about fractions, and for removing the comment from my answer.. :-) – Ned Batchelder Sep 12 '11 at 14:30
@Ned Batchelder: Yeah, it was overly aggresive and not thought through - I didn't even read the second part of your answer. – nightcracker Sep 12 '11 at 14:35
add comment
Python has unlimited precision integers, but not unlimited precision floats. There are packages you can use that provide that, though.
And nothing can "complete" an infinite sum, since it involves an infinite number of steps. You'll need to find a closed form for the sum, and then evaluate that, or accept an approximation achieved by terminating the infinite sum when a precision criterion is met.
1 For "unlimited" precision floats, use the Decimal python module. Batteries included! :) – Lauritz V. Thaulow Sep 12 '11 at 14:14
A bit of a shot in the dark here... I bet when you heard about Python being able to do infinite sums, what they meant was that in Python long integers have unlimited precision.
Clearly, this has nothing to do with summing infinite series.
I am not aware of any facet of Python that would make it particularly well suited to computing this kind of sums (or indeed establishing whether a sum is convergent).
You could try the direct summation of terms, with some reasonable stopping criterion. However, this will only work for well-behaved series.
Finally, just to give you some flavour of the complexity of what you're asking for, academic papers get published whose sole purpose is to deal with the summation of certain small
classes of series. The general problem that you're posing isn't as easy as it may seem.
#It may be a late answer but the following works well.

repetitions = 50
r = 0.0
for i in range(repetitions):
    ii = i + 1  # because in python the index starts from 0
    r += (-1)**(ii - 1) / (2.0*ii - 1)
print r
#the output is r=0.780398663148, you can increase the repetitions for more accuracy
For some series, such as the one shown, you can use the alternating series test to compute the sum to within a desired error. Libraries such as Decimal, GyPy, mpmath, or bigfloat, etc, can
be used if your calculation will bump into the precision of built-in floats.
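A sketch of that stopping rule (the names here are mine, not from the thread): for an alternating series whose terms decrease in absolute value, the truncation error is bounded by the first omitted term, so we can stop as soon as that term drops below the requested tolerance:

```python
import math

def alternating_sum(term, tol):
    """Sum term(1) - term(2) + term(3) - ... until term(n) < tol."""
    total, n, sign = 0.0, 1, 1.0
    while True:
        t = term(n)
        if t < tol:
            # alternating series test: error is at most the first omitted term
            return total
        total += sign * t
        sign, n = -sign, n + 1

# The series from the question: 1 - 1/3 + 1/5 - ... = pi/4
approx = alternating_sum(lambda n: 1.0 / (2*n - 1), 1e-6)
print(round(approx, 6))  # 0.785398, i.e. pi/4 to six digits
```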
Note on integer approaches:
Although the ratio-of-integers approaches seem more accurate, they are completely impractical for real calculations. The reason for this is: 1) adding fractions requires creating equal
denominators, and this basically requires multiplying the denominators, so by then end, the size of the numbers is something like n! (i.e., the factorial); and, 2) for the example series, a
precision of m digits requires m terms. Therefore, even for only six digit accuracy, one requires numbers roughly equal to 1000000! = 8×10^5,565,708. For bigger numbers, it's roughly 10^10^
n, which quickly becomes completely impractical. Meanwhile, a decimal solution calculated to 6 or 7 or even 40 digits is trivial.
For example, running nightcracker's solution, the times and numbers of digits in the denominator or numerator I get are:

n      t (s)    n_digits_in_denominator
10     0.0003   14
100    0.0167   170
1000   5.5027   1727
10000  ????     ???? (gave up after waiting one hour)
And this becomes impractical for only ~4 digits of accuracy.
So if you want to exactly calculate a finite and small number of terms and express the final result as a ratio, then the integer solutions would be a good choice, but if you want to express
the final result as a decimal, you'd be better off just sticking with decimals.
Not the answer you're looking for? Browse other questions tagged python or ask your own question. | {"url":"http://stackoverflow.com/questions/7389109/infinite-sums-in-python","timestamp":"2014-04-18T04:18:57Z","content_type":null,"content_length":"92052","record_id":"<urn:uuid:b3ca5454-3769-437a-afe9-977d4c324c53>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pontryagin Product on the Homology of $CP^{\infty}$
Is there an explicit description of the Pontryagin product on the homology of $CP^{\infty}$? Also, what is the homology of the classifying spaces $BU(n)$?
at.algebraic-topology reference-request classifying-spaces
Look it up in "Topology of Lie groups I, II" by Mimura and Toda. – Fernando Muro Apr 14 '13 at 10:01
The Hopf algebra structure on homology is dual to cohomology. One often uses that the diagonal of a generator in cohomology is easy to work out (for degree reasons) and that it is an algebra map.
2 The Hopf algebra structure on $H^{*}(CP^{\infty};R)$ is given by $R[[x]]$ where x is primitive and so the dual Hopf algebra structure on homology will be a divided power algebra where the element
dual to $x$ is primitive (and this determines the rest of the structure). One can think of this as the algebra with generators $\frac{1}{k!}\frac{d^{k}}{dx^{k}}$ and the coproduct as given by the
Leibniz rule – Callan McGill Apr 14 '13 at 10:30
Browse other questions tagged at.algebraic-topology reference-request classifying-spaces or ask your own question. | {"url":"http://mathoverflow.net/questions/127508/pontryagin-product-on-the-homology-of-cp-infty","timestamp":"2014-04-18T00:59:30Z","content_type":null,"content_length":"48052","record_id":"<urn:uuid:b5e35b5b-8da6-41b3-b8ee-c0fe2f455494>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
::Forum - city revenue equation question - Star Wars Combine::
Year 7
Day 152 city revenue equation question
Zao Nephalem Ok, I'm working on a spreadsheet that will let my faction play with the #s and such to optimize a city for revenue. Here is the thing, I have a question on one equation, the crime rate equation. It is written here as: CR = CL x (Sum of (Facility [A] x surface [A]) ) / (Sum of (Facility [B] x surface [B] ) ) x eTL / 0.83.
Now, I'm using Excel, just wondering how I would put this in. I get drastic differences in crime rates and income when I enter it in different ways. I have two
extreme ways of entering it in, just wondering which one is correct:
just straight with
CL x sum of A / sum of B x (eTL/0.83)
or grouped as
(CL x sum of A)/(sum of B x(eTL/0.83))
which is correct using excel?
Reek Havoc and let loose the Hogs of War
Year 7
Day 152 city revenue equation question
Ranma Best is to send an email directly to the simmaster@swcombine.com
SWC Fanpage --\\-- SWC Twitter
Year 7
Day 152 city revenue equation question
Arc Hey Ramma, when you get a reply, could you post the explanation here as well? I'm working on a similar spreadsheet, and was wondering the same thing.
Year 7
Day 152 city revenue equation question
Ranma I was suggesting that Zao email veynom, not me silly.
SWC Fanpage --\\-- SWC Twitter
Year 7
Day 152 city revenue equation question
Arc Meh, my bad. I knew that, but something was lost between my brain and my fingers. Ah well, it's time to sleep.
Year 7
Day 152 city revenue equation question
Hal Breden The crime rate (IIRC) is actually calculated as (old crime rate) + (crime rate modifier of new facility). So the value is modified each time by construction, and NOT recalculated.
So good luck trying to calculate it in advance.
"May the Grace of Ara go with you, and His Vengeance be wrought upon your enemies."
Only fools and children dream of heroes.
Year 7
Day 152 city revenue equation question
Zao Nephalem Ok, I'll e-mail simmaster@swcombine.com with this, and I'll post the reply here. There shouldn't be an old crime rate though, that is if you are building on planets with no population. At least that is what we are primarily doing anyways.
Reek Havoc and let loose the Hogs of War
Year 7
Day 153 city revenue equation question
Imazushi Um, Rand Axim has something somewhat like that, just for building. Maybe he can help you out.
Year 7
Day 153 city revenue equation question
Zao Nephalem Ok guys, the official word from veynom when using excel is that the first one is correct:
CL x sum of A / sum of B x (eTL/0.83)
Where it is a X function, / function, X function, then x grouped function, where only grouping the division function at the end (eTL/0.83) is one variable.
This does make more sense, ... using the other equation, I could tweek the building of a city to turn out 1.3 billion a month, ... if there were not building
Reek Havoc and let loose the Hogs of War
Year 7
Day 153 city revenue equation question
Zao wait, wait, I said that wrong, .... they are all state functions, group the last one together as a variable (eTL/0.83). That is just a better way of saying it.
Reek Havoc and let loose the Hogs of War | {"url":"http://www.swcombine.com/forum/thread.php?thread=7197&page=0","timestamp":"2014-04-18T10:39:52Z","content_type":null,"content_length":"32115","record_id":"<urn:uuid:227d0833-0f9e-4a5e-9efa-a3399a71b322>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00281-ip-10-147-4-33.ec2.internal.warc.gz"} |
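For anyone checking this in a spreadsheet or script, the two groupings genuinely differ; a quick sketch with made-up placeholder values for CL, the sums, and eTL (left-to-right evaluation matches how Excel treats `*` and `/` at equal precedence):

```python
# Placeholder values purely for illustration; these are not game numbers.
CL, sum_a, sum_b, eTL = 100.0, 50.0, 20.0, 5.0

# Left-to-right, as Excel evaluates CL * sum_a / sum_b * (eTL / 0.83):
left_to_right = CL * sum_a / sum_b * (eTL / 0.83)

# The alternative grouping (CL * sum_a) / (sum_b * (eTL / 0.83)):
grouped = (CL * sum_a) / (sum_b * (eTL / 0.83))

# The two answers differ by a factor of (eTL / 0.83) squared.
print(left_to_right, grouped)
```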
New W3C Note: "Arabic mathematical notation"
From: Bert Bos <bert@w3.org> Date: Wed, 1 Feb 2006 12:40:46 +0100 To: www-math@w3.org Message-Id: <200602011240.46929.bert@w3.org>
The Math IG just published a Note:
Title: Arabic mathematical notation
URL: http://www.w3.org/TR/2006/NOTE-arabic-math-20060131/
Editors: Azzeddine Lazrek, Mustapha Eddahibi, Khalid Sami, Bruce R.
Abstract: This Note analyzes potential problems with the use of MathML
for the presentation of mathematics in the notations
customarily used with Arabic, and related languages. The goal
is to clarify avoidable implementation details that hinder
such presentation, as well as to uncover genuine limitations
in the specification. These limitations in the MathML
specification may require extensions in future versions of the
Comments are welcome, of course. Please send them to this list.
Bert Bos ( W 3 C ) http://www.w3.org/
http://www.w3.org/people/bos W3C/ERCIM
bert@w3.org 2004 Rt des Lucioles / BP 93
+33 (0)4 92 38 76 92 06902 Sophia Antipolis Cedex, France
Received on Wednesday, 1 February 2006 11:40:52 UTC
This archive was generated by hypermail 2.3.1 : Wednesday, 5 February 2014 07:13:40 UTC | {"url":"http://lists.w3.org/Archives/Public/www-math/2006Feb/0001.html","timestamp":"2014-04-19T12:26:51Z","content_type":null,"content_length":"8542","record_id":"<urn:uuid:ebfb3f6a-5cc7-4c6e-864f-96e1723a3d0f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
No. 2427: Arrow’s Paradox
Today, we vote. The University of Houston’s College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them.
Political scientists have long been aware that there are problems with voting systems; problems so deep and so fundamental, they leave us scratching our heads and asking what's going on.
Trouble first surfaced during the Enlightenment, as Jean-Charles de Borda and the Marquis de Condorcet debated the merits of different voting schemes. But it was not until 1951 that Nobel
laureate Kenneth Arrow fully laid bare a problem that Borda and Condorcet had been struggling with.
Borda advocated letting people rank each candidate with a number, adding the points, and choosing the candidate with the best total score. We could view the method of voting we use today as a
special case of Borda's method — where our favorite candidate receives one point and everyone else receives none.
Condorcet, on the other hand, advocated a vote between every pair of candidates. The candidate that wins in every comparison is elected. The practical problem with Condorcet's method is that it
may fail to produce a winner. We see this all the time in athletic competitions. The Astros beat the Reds, the Reds beat the Cubs, and the Cubs beat the Astros. Who's the winner? In voting, this
is known as Condorcet's Paradox.
But there's a hidden problem with Borda's method of numerical ranking, too. Imagine we can get chocolate or vanilla ice cream for our picnic group. We cast votes, and chocolate wins. Now suppose
someone suggests strawberry as an option. We add it to the list and vote again. Even though we all feel the same way about chocolate and vanilla, we may find vanilla now wins. Seems silly. But
it’s a very real problem in U.S. elections, and the Democratic and Republican parties constantly worry about candidates from third parties claiming votes.
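The flip can be made concrete with a small Borda tally. The vote counts below are invented for illustration: three voters rank chocolate first, two rank vanilla first, and nobody's chocolate-versus-vanilla preference changes when strawberry is added.

```python
def borda_winner(ballots):
    """Borda count: with k candidates, a voter's i-th choice earns k-1-i points."""
    scores = {}
    for ranking in ballots:
        k = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (k - 1 - i)
    return max(scores, key=scores.get), scores

# Two flavors: chocolate wins the head-to-head vote 3 to 2.
two_way = [["chocolate", "vanilla"]] * 3 + [["vanilla", "chocolate"]] * 2

# Add strawberry; relative chocolate/vanilla preferences are unchanged,
# yet vanilla now wins the Borda tally 7 to 6.
three_way = ([["chocolate", "vanilla", "strawberry"]] * 3
             + [["vanilla", "strawberry", "chocolate"]] * 2)

print(borda_winner(two_way)[0])    # chocolate
print(borda_winner(three_way)[0])  # vanilla
```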
One of the most famous examples occurred in the 2000 presidential election where George W. Bush narrowly defeated Al Gore. That same year, Ralph Nader won close to three percent of the popular
vote. Political scientists believe that had Nader not been on the ticket, most of his votes would have gone to Gore, changing the outcome of the election.
We might ask if there’s a voting system — any system at all — that doesn't threaten to flip-flop two candidates when a third candidate enters the race. Remarkably, Arrow proved that for any
system meeting the most basic standards of common sense, the answer is No.
The implications for voting are stunning. But the impact of Arrow's work on economics and social choice goes far deeper. If we can't reasonably combine individual preferences, how can we develop
economic or social policies and then claim they represent what society prefers? In a real sense, Arrow used mathematics to show that we can’t; that instead, rhetoric, gamesmanship, and back-room
deals must necessarily be part of the political process.
I’m Andy Boyd at the University of Houston, where we’re interested in the way inventive minds work.
(Theme music)
Arrow presented five postulates that any "sensible" or "fair" voting system should satisfy. He then mathematically proved that these postulates were mutually contradictory — no voting system
could satisfy all five.
For brevity, we've focused on the most famous postulate, the independence of irrelevant alternatives, which loosely states that when candidate A is preferred to B, then A should still be
preferred to B if other candidates enter or leave the election. As basic as this may seem, the other postulates are even more so, and are therefore simply referred to as "basic standards of
A simple, brief description of Arrow's postulates and links to a proof can be found at: http://www.websters-online-dictionary.org/definition/ARROW%2527S+PARADOX.
A more detailed description, including an outline of a proof, can be found at http://en.wikipedia.org/wiki/General_Possibility_Theorem.
This is a substantially revised version of Episode 1921
The Engines of Our Ingenuity is Copyright © 1988-2008 by John H. Lienhard. | {"url":"http://www.uh.edu/engines/epi2427.htm","timestamp":"2014-04-19T14:39:18Z","content_type":null,"content_length":"7244","record_id":"<urn:uuid:7cb5c26c-c1e2-4934-b299-078f655a2338>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measuring Omega - A. Dekel et al.
3.3. Cosmic Virial Theorem
In the cosmic virial theorem (CVT), clustering is assumed to be statistically stable on scales h^-1Mpc, such that the ensemble-average contribution of a third mass particle to the relative
acceleration of a pair of galaxies is balanced by the relative motions of the ensemble of pairs. The kinetic energy is represented by the pairwise relative velocity dispersion of galaxies, σ₁₂(r), and the potential energy involves a combination of integrals of the 2- and 3-point galaxy correlation functions [26]. A very crude approximation to the actual relation is
where ξ(r) is the two-point correlation function. In some ways this is like stacking many groups of galaxies to get a mean M/L, so it can be regarded as a statistical version of the M/L method.
On scales 1 - 10 h^-1Mpc the more useful statistic for dynamical mass estimates is the mean (first-moment) pairwise velocity in comparison with an integral of the galaxy two-point correlation function.
In the approach of the Layzer-Irvine (LI) equation, the kinetic energy is associated with the absolute rms velocity (in the CMB frame) of individual galaxies and the potential energy is an integral
over the 2-point correlation function.
New developments: It has been realized that the pair velocity dispersion is an unstable statistic that is dominated by galaxies in clusters. Attempts are being made to apply the method while
excluding clusters.
Pro: There is no need to associate galaxies with separate groups and clusters; it is all statistical.
Con: The relevant correlation integrals, and in particular the 3-point correlation function that enters the CVT, are very difficult to measure, and the resulting Ω estimates tend to come out low. The overlap of extended galactic halos makes a significant difference. The estimates refer to the mass associated with the galaxies (as in the M/L method); otherwise, they must refer to a specific model for galaxy biasing.
Current Results: The line-of-sight pair velocity dispersion outside of clusters is in the range [f] km s^-1 [27][28]. With Peebles' old estimate of the correlation functions, assuming point masses and no biasing, he obtains from the CVT Ω ≈ [...] [6]; ref. [27] obtains Ω ~ 0.25 [29].
\[\lim_{h \rightarrow 0}\frac{ \frac{ 4 }{ 2+2h }-2 }{ h }\]
limit approaches -2
no it approaches 0
maybe reducing it all to just one fraction might help
[drawing] i know im doing something wrong
find a common denominator in the numerator.
\[\huge \frac{ \frac{ 4-2(2+2h) }{ 2+2h } }{h }\]
\[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+2h}-2}{h}\] is the same as \[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+2h}-\frac{2(2+2h)}{2+2h}}{h}\] and further... \[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+2h}-\frac{4+4h}{2+2h}}{h}\]
Now reduce it to one fraction, and do any cancellations, if possible
why is it \[\frac{ 4x }{ 2+2h }\]
its supposed to be (2+h) *
Looked at your first post... Ok, then \[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+h}-2}{h}\]\[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+h}-\frac{2(2+h)}{2+h}}{h}\] \[\huge \lim_{h \rightarrow 0}\frac{\frac{4}{2+h}-\frac{4+2h}{2+h}}{h}\]
yup limit approaches to -2 as x approaches to zero
@mark_o. You might want to reconsider..., given that there was a correction; it's 2+h, not 2+2h as was in the first post.
sorry typing error :$
[drawing] why do i get this :S this doesnt make sense because that would mean the answer is zero
like what am i doing wrong?
You were to multiply \[\frac{2+h}{2+h}\] to the -2 in the numerator; not to the entire fraction.
that among other things... But yeah, multiply the \[\frac{2+h}{2+h}\] only to the -2, and redo your work...
but that doesnt get rid of the fraction
hmm @terenzreignz, i didnt see the first posting of 2+h, i only saw 2+2h,, ththnx for reminding though...:D
We're not to get rid of it, only to make it simpler, ie, only one fraction bar.
[drawing] like that ?
That's right. Please continue.... :)
cross multiply?
nvm doesnt matter
Just distribute the 2.
ok... I think I know where you're getting at... Be careful, the way you wrote it is full of... difficulties. On top, you should have 4 - (4 + 2h) Groupings can make a world of difference, so you
better specify which operations are prioritised...
Not to mention you may have ignored and/or forgotten that there is another h in the denominator. Careful...
ohh alright !
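For reference, carrying the thread's algebra through to the end (with the corrected denominator 2+h, this limit is the derivative of 4/x at x = 2):

```latex
\lim_{h \to 0}\frac{\frac{4}{2+h}-2}{h}
 = \lim_{h \to 0}\frac{4-2(2+h)}{h(2+h)}
 = \lim_{h \to 0}\frac{-2h}{h(2+h)}
 = \lim_{h \to 0}\frac{-2}{2+h}
 = -1.
```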
SVG 1.1 2nd Edition Test (<object>): animate-elem-67-t.svg
Pass Criteria
The test passes if for the first five seconds after the document loads, the red squares in each row (two in the third row, and one each in the remaining rows) are in the right column, and after the
five seconds, they all move to the left column. | {"url":"http://www.w3.org/Graphics/SVG/Test/20110816/harness/htmlObjectMini/animate-elem-67-t.html","timestamp":"2014-04-20T21:35:46Z","content_type":null,"content_length":"3318","record_id":"<urn:uuid:f3449b83-dce0-4587-9a32-fd6b59b99cb1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
VIX Fair Value Model
I created a model which comes up with the fair value of the near-term+6 VIX. The CBOE website has the historical VIX term structure. As I mentioned before, there is a strong relationship between high-yield OAS and VIX. Here is the model vs. the actual near-term+6 VIX:
So, what is this model telling us today?
As you can see, this is the Dec VIX fair value; the VIX spot and the near-term futures are a lot more volatile.
I'm working on a model to come up with a fair value for the shorter term VIX as well.
Let me know your thoughts.
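A minimal sketch of the kind of fair-value fit described here, regressing a longer-dated VIX level on high-yield OAS with ordinary least squares. The numbers below are made up for illustration; the real model uses the CBOE term-structure history and an OAS series.

```python
import numpy as np

# Illustrative inputs: high-yield OAS (bps) and 6th-expiration VIX levels.
oas = np.array([450.0, 520.0, 610.0, 700.0, 820.0, 950.0])
vix6 = np.array([18.0, 19.5, 21.0, 23.0, 25.5, 28.0])

# Fit fair value as a linear function of OAS: vix6 ~ a + b * oas.
design = np.column_stack([np.ones_like(oas), oas])
(a, b), *_ = np.linalg.lstsq(design, vix6, rcond=None)

fair_value = a + b * oas       # model fair value at each observation
residual = vix6 - fair_value   # rich/cheap signal relative to the model
print(f"intercept={a:.2f}, slope={b:.4f}")
```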
7 comments:
1. Hey,
Great blog, congratulations!
Where did you get your OAS data, is it freely available somewhere?
Also, what exactly do you mean by "near-term+6"?
Thanks, and apologies if these questions don't make any sense.
2. By near term+6 I meant the 6th expiration. For example, today that would be 12/18/2010. You can find more info about the VIX term structure on CBOE website. Send me an e-mail to
volatilitysquare@gmail.com if you want the OAS data.
3. So it's the VIX futures 6 months out? Great, thanks very much!
4. Sorry, I meant 6 contracts out. Great blog by the way, very informative and thought provoking.
5. Robert,
did you decide to drop this blog? Why? I thought you were writing some good things. This VIX model is interesting, I would like to discuss, give me a ring.
6. Have you heard about spurious regressions
7. i trade vix futures exclusively. if you could share some fair val modeling i am certain i can add equal insights in return. | {"url":"http://volatilitysquare.blogspot.com/2010/02/vix-fair-value-model.html","timestamp":"2014-04-19T17:04:44Z","content_type":null,"content_length":"94597","record_id":"<urn:uuid:f2b8781e-9cda-4663-b41a-9f5f2da304e1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
Empirical Bayes Estimation of On Base Percentage
December 30, 2010
By Andrew Landgraf
I guess you could call this On Bayes Percentage. *cough*
Fresh off learning Bayesian techniques in one of my classes last quarter, I thought it would be fun to try to apply the method. I was able to find some examples of Hierarchical Bayes being used to analyze baseball data.
Setting up the problem
On base percentage (OBP) is probably the most important basic offensive statistic in baseball. Getting a reliable estimate of a players true ability to get on base is therefore important. The basic
problem is that the sample size we get from one season rarely has enough observations so that we are certain of a player's ability. Even though there are 162 games in a season, there is a possibility
that the actual OBP is the result of luck rather than skill. Bayesian analysis will "regress" the actual observed OBP to the mean, in that if a player has a small number of plate appearances (PA) it
doesn't give them very much weight and the result will be something closer to the overall (MLB) average. On the other hand, if a player has quite a few PAs then it believes that the results are not
the result of luck and it gives the observations a lot of weight.
We are trying to estimate the "true" OBP of each batter. Bayesian analysis assumes that the true OBP is random.
Empirical Bayes is a method of figuring out the distribution of "true" OBP using the data. OBP is times on base divided by PA. Times on base (X) for each batter is distributed Binomial(n, p) with n = PA and p = true OBP. We further assume that p is distributed Beta(a, b) with parameters a and b. It follows from this that the marginal distribution of X is the beta-binomial distribution:
gamma(a+b)*gamma(a+x)*gamma(n-x+b)*(n choose x)/(gamma(a)*gamma(b)*gamma(a+b+n))
where gamma is the gamma function.
We will estimate the parameters a and b based on the data (X), using its marginal distribution (the "empirical" part of Bayes). To do this I found the likelihood of the marginal distribution of all the batters. Then I maximized this likelihood by adjusting the parameters a and b. This is called the ML-II estimate.
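A sketch of that ML-II step under the beta-binomial marginal above, using SciPy. The four (x, n) pairs are toy values for illustration; the post's actual fit used every 2010 non-pitcher.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_marginal(log_params, x, n):
    """Negative log of the product of beta-binomial marginals over batters."""
    a, b = np.exp(log_params)  # work on the log scale to keep a, b positive
    ll = (gammaln(a + b) + gammaln(a + x) + gammaln(n - x + b)
          + gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
          - gammaln(a) - gammaln(b) - gammaln(a + b + n))
    return -ll.sum()

# Toy data: times on base x and plate appearances n for four batters.
x = np.array([275.0, 152.0, 60.0, 30.0])
n = np.array([648.0, 348.0, 200.0, 120.0])

res = minimize(neg_log_marginal, x0=np.log([80.0, 170.0]), args=(x, n))
a_hat, b_hat = np.exp(res.x)
print(a_hat, b_hat, a_hat / (a_hat + b_hat))  # fitted prior and its mean
```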
The Analysis
I used data for all non-pitchers in 2010. I assume that each player is independent. In doing that, I just have to multiply all the marginals for each player together to get the likelihood. When I do
this and maximize it with respect to a and b, I get estimates that a = 83.48291 and b = 174.9038. I think this can be interpreted to mean that the prior mean (what we would assume the average OBP of a batter to be before seeing him bat) is a/(a+b) = 0.323. This is pretty close to what the overall OBP of the league was (0.330). I think it makes sense that the prior is lower than the league average because
batters who do well will get more opportunities and players that do poorly will get fewer. So the league average is biased high.
Below is a graph of the prior distribution and the updated posteriors of every batter. You can (sort of) see that the posteriors have tighter distributions than the prior does. (The posterior
distribution of each batter in this case is the distribution of OBP after we have observed PA and the actual OBP.)
One way to see why this Bayesian analysis is useful is to compare the posterior means with the observed OBP. If someone has only a few PAs, their OBP could be very high or very low and this may
mislead you into thinking that this batter is very good or bad. However, the posterior mean takes into account the number of PAs. Below is a graph comparing the two. You can see that the range of
values for the posterior mean is pretty small, especially compared to the actual OBP.
Here is a list of the highest posterior mean OBP:
Batter Posterior Mean Actual OBP
Joey Votto 0.396 0.424
Miguel Cabrera 0.392 0.420
Albert Pujols 0.390 0.414
Justin Morneau 0.388 0.437
Josh Hamilton 0.383 0.411
Prince Fielder 0.380 0.401
Shin-Soo Choo 0.379 0.401
Kevin Youkilis 0.379 0.412
Joe Mauer 0.378 0.402
Adrian Gonzalez 0.374 0.393
Daric Barton 0.374 0.393
Jim Thome 0.373 0.412
Paul Konerko 0.373 0.393
Jason Heyward 0.373 0.393
Matt Holliday 0.371 0.390
Carlos Ruiz 0.371 0.400
Manny Ramirez 0.371 0.409
Billy Butler 0.370 0.388
Jayson Werth 0.370 0.388
Ryan Zimmerman 0.369 0.388
And here is a list of the lowest posterior mean OBP:
Batter Posterior Mean Actual OBP
Brandon Wood 0.252 0.175
Pedro Feliz 0.271 0.240
Jeff Mathis 0.276 0.219
Garret Anderson 0.277 0.204
Adam Moore 0.281 0.230
Josh Bell 0.285 0.224
Jose Lopez 0.286 0.270
Peter Bourjos 0.287 0.237
Aaron Hill 0.287 0.271
Tony Abreu 0.288 0.244
Koyie Hill 0.291 0.254
Gerald Laird 0.291 0.263
Drew Butera 0.291 0.237
Jeff Clement 0.291 0.237
Matt Carson 0.291 0.193
Humberto Quintero 0.292 0.262
Wil Nieves 0.292 0.244
Matt Tuiasosopo 0.292 0.234
Luis Montanez 0.292 0.155
Cesar Izturis 0.292 0.277
You can see that all of the posterior means are pulled closer to the overall mean (the good players look worse and the bad players look better). The order changes a little bit but not too much.
You can see the effect of sample size (PAs) by comparing Justin Morneau with Joey Votto. Morneau had a higher OBP, but Votto ended up with a higher posterior mean because he had more PAs (Votto had
648 while Morneau had 348). Here are their posterior distributions:
Because of the additional PAs, you can see that the distribution of Votto is a little tighter than Morneau. We are more sure that Votto is excellent than we are sure that Morneau is excellent.
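The shrinkage in these tables follows directly from the beta-binomial update: the posterior mean is (a + x)/(a + b + n). Times on base is inferred below by rounding OBP × PA, so this is a sketch rather than the exact calculation, but it reproduces the table to three decimals:

```python
a, b = 83.48291, 174.9038  # ML-II estimates quoted above

def posterior_mean_obp(obp, pa):
    """Posterior mean OBP = (a + x) / (a + b + n), with x times on base."""
    x = round(obp * pa)  # times on base inferred from OBP and PA
    return (a + x) / (a + b + pa)

votto = posterior_mean_obp(0.424, 648)    # about 0.396
morneau = posterior_mean_obp(0.437, 348)  # about 0.388
print(round(votto, 3), round(morneau, 3))
```

Votto ends up ahead despite the lower raw OBP because his larger PA count gives the data more weight against the prior.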
Probing the equation of state with pions
The influence of the isospin-independent, isospin- and momentum-dependent equation of state (EoS), as well as the Coulomb interaction, on pion production in intermediate energy heavy ion collisions (HICs) is studied for both isospin-symmetric and neutron-rich systems. The Coulomb interaction plays an important role in the reaction dynamics, and strongly influences the rapidity and transverse momentum distributions of charged pions. It even leads to the pi^-/pi^+ ratio deviating slightly from unity for isospin-symmetric systems. The Coulomb interaction between mesons and baryons is also crucial for reproducing the proper pion flow, since it changes the behavior of the directed and the elliptic flow components of pions visibly. The EoS can be better investigated in neutron-rich systems if multiple probes are measured simultaneously: for example, the rapidity and the transverse momentum distributions of the charged pions, the pi^-/pi^+ ratio, the various pion flow components, as well as the difference of the pi^+ and pi^- flows. A new sensitive observable is proposed to probe the symmetry potential energy at high densities, namely the transverse momentum distribution of the elliptic flow difference \Delta v_2^{\pi^+ - \pi^-}(p_t^{\rm c.m.}).
Author: Qingfeng Li, Zhuxia Li, Sven Soff, Marcus Bleicher, Horst Stöcker
URN: urn:nbn:de:hebis:30-19786
Document Type: Preprint
Language: English
Date of Publication (online): 2005/10/27
Year of first Publication: 2005
Publishing Institution: Univ.-Bibliothek Frankfurt am Main
Release Date: 2005/10/27
HeBIS PPN: nucl-th/0509070
HeBIS PPN: 190459204
Institutes: Physik
Frankfurt Institute for Advanced Studies (FIAS)
Dewey Decimal Classification: 530 Physik
Sammlungen: Universitätspublikationen
Licence (German): Veröffentlichungsvertrag für Publikationen | {"url":"http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/3419","timestamp":"2014-04-18T05:56:11Z","content_type":null,"content_length":"16783","record_id":"<urn:uuid:0b3d54c7-0cde-40cf-bd2b-5428a2c8f962>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
Which of the following equations represents a line which is perpendicular to the line that passes through the points below? (-2, -6), (0, -2)
y = -2x + 2
y = 1/2 x + 2
y = -1/2 x + 2
y = 2x + 2
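One way to work it: the slope through the two points fixes the perpendicular slope.

```latex
m = \frac{-2-(-6)}{0-(-2)} = \frac{4}{2} = 2,
\qquad
m_{\perp} = -\frac{1}{m} = -\frac{1}{2},
\qquad
\text{so } y = -\tfrac{1}{2}x + 2 \text{ is the perpendicular choice.}
```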
New methods for simulating the Student T-distribution - Direct use of the inverse cumulative distribution function
Shaw, William T. (2005) New methods for simulating the Student T-distribution - Direct use of the inverse cumulative distribution function. Submitted to Journal of Computational Finance. (Submitted)
We explore the Student T-Distribution and present some new techniques for simulation. In particular, an explicit and accurate approximation for the inverse cumulative distribution function is given.
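The paper's closed-form approximation is not reproduced here, but the general inverse-CDF recipe it accelerates can be sketched with SciPy's numerical Student t quantile function (degrees of freedom, sample size, and seed below are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 4.0                      # degrees of freedom
u = rng.uniform(size=10_000)  # uniforms on (0, 1)

# Inverse-CDF sampling: pass uniforms through the quantile function.
samples = stats.t.ppf(u, df=nu)

# Round trip: applying the CDF recovers the original uniforms.
print(np.allclose(stats.t.cdf(samples, df=nu), u))
```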
Item Type: Other
Uncontrolled Keywords: Student, Distribution, T-distribution, Simulation, Monte Carlo, Inverse Cumulative Distribution Function, Inverse CDF.
Subjects: O - Z > Statistics
A - C > Approximations and expansions
O - Z > Operations research, mathematical programming
D - G > Game theory, mathematical finance, economics, social and behavioral sciences
A - C > Computer science
O - Z > Probability theory and stochastic processes
Research Groups: Mathematical and Computational Finance Group
ID Code: 184
Deposited By: William Shaw
Deposited On: 09 Aug 2005
Last Modified: 20 Jul 2009 14:19
Repository Staff Only: item control page | {"url":"http://eprints.maths.ox.ac.uk/184/","timestamp":"2014-04-20T05:55:22Z","content_type":null,"content_length":"14428","record_id":"<urn:uuid:6a5a3b56-2f48-4b16-b66e-5ba28a9dfeb4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Total # Posts: 331
Am going correct with this one. It is estimated that the Earth is losing 4000 species of plants and animals every year. If S represents the number of species living last year, how many species are on
Earth this year? S-4000=1 S=1+4000 S=4001 of species we have on this earth th...
Can you check my previous posting? I replied already. Thank you, done below. What about the other messages, are they okay too? I think I just finished with all of them.
Is this correct? Business and finance. The simple interest I on a principal of P dollars at interest rate r for time t, in years, is given by I = Prt. Find the simple interest on a principal of $6000 at 3% for 2 years. Simple Interest = Principal x Rate x Time (in years)
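Checking the arithmetic with the stated formula (rate written as a decimal):

```python
def simple_interest(principal, rate, years):
    """I = P * r * t, with the rate as a decimal."""
    return principal * rate * years

print(simple_interest(6000, 0.03, 2))  # 360.0
```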
What kind of expression is good to use for this problem so I can solve it? The kinetic energy of a particle of mass m is found by taking one-half of the product of the mass (m) and the square of the velocity (v). Write an expression for the kinetic energy of a particle. The pr...
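The requested expression, written out:

```latex
E_k = \tfrac{1}{2} m v^2
```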
How do I even write an equation for this problem all i need someone to show me how to get an equation so i can solve it. It is estimated that the Earth is losing 4000 species of plants and animals
every year. If S represents the number of species living last year, how many spe...
math, algebra
Can someone correct these for me Directions: Write each of the following phrases, using symbols. (1)The sum of m and n, divided by the difference of m and n.
why is x+2 different from 2x? My answer: don't both of them mean the same thing? If x was 5, x+2=7 and 2x=10
Can someone check this for me. Directions: Identify the property that is used. (1) 5+(6+7)=5+(7+6) My answer: The commutative property of addition (2) 4 x (3+2) = (3+2) x 4 My answer: The commutative property of multiplication. Right on both.
(1) Why is x+2 different from 2x? (2) Use the distributive property to remove the parentheses in each expression. Then simplify by combining like terms. ORIGINAL PROBLEM: 7(4W-3)-25 My answer: Is this answer correct? 7(times)4W + 7(times)(-3) - 25, 7(times)4W - 21 - 25, 28W - 21 - 25, 3W - 21
effects on religion, writeacher
How does this sound? I've included what Jen said to include. Also, how can I cite what she told me to put or talk about? Religion has encounter greatly throughout the world since the beginning and
still now although not as much. Some negative attributes that religion has cau...
Relion effects , Jen
Its asking me to do two separate things one: List first effect thesis and provide examples. THe second one is :List second effect thesis with examples. THe effects I have to think of positive as well
as negative how should I word it. So you have to choose one positive and one ...
Effects on religion, Jen
Which ones are the positive effects that organized religion has had on social groups in the past as well as in the more recently. Which ones are the negative effects that organized religion has had
on social groups in the past as well as in the more recently. You said the foll...
Effects of religon
What are some effects that organized religion has had on society and what are some examples from past and/or present world events that illustrates the examples? religion has caused countless wars in
world history. This includes: The Crusades, The 30 years War, The French Wars ...
math, algebra
How is 28 a perfect number? Can someone explain this to me. All I understand is that all perfect numbers end in 6 or 8. But how and why is 28 a perfect number? Dimes are packaged 50 in a roll. Quarters come 40 in a roll. Susan B. Anthony dollars come in rolls of 25. How many rolls...
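For what it's worth, "perfect" means the number equals the sum of its proper divisors, which 28 does (1 + 2 + 4 + 7 + 14 = 28); a quick check:

```python
def is_perfect(n):
    """True when n equals the sum of its proper divisors."""
    return n == sum(d for d in range(1, n) if n % d == 0)

print(sorted(d for d in range(1, 28) if 28 % d == 0))  # [1, 2, 4, 7, 14]
print([n for n in range(2, 500) if is_perfect(n)])     # [6, 28, 496]
```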
can someone correct this for me. Directions: Insert grouping symbols in the proper place so that the given value of hte expression is obtained. 5-3* 2 + 8 * 5 -2 ; 28 My answer: 5-3*2+8* (5-2) (5-3)
*2 + 8*(5-2) = 2*2 + 8*3 = 4 + 24 = 28 Thank you that makes more sense
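The grouping worked out above is easy to confirm by direct evaluation (a quick sketch; with no parentheses the ordinary order of operations gives a different value):

```python
# Without grouping symbols, normal operator precedence applies:
ungrouped = 5 - 3 * 2 + 8 * 5 - 2          # 5 - 6 + 40 - 2
print(ungrouped)   # 37

# With the grouping symbols inserted as in the worked answer:
grouped = (5 - 3) * 2 + 8 * (5 - 2)        # 2*2 + 8*3 = 4 + 24
print(grouped)     # 28
```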
math, Jay
How do these fractions turn into the following fractions. 1/3=10/30 1/5=6/30 1/10=3/30 There is a fundamental fact of math that says if you multiply anything by one , nothing changes. but 1 = 3/3 =
10/10 etc. So if you have a fraction, and multiply it be the same numerator AND...
Is this the correct way to solve these math problems and are the answers correct. Directions:Solve the following applications. Science. Jose rode his trail bike for 10miles. Two-thirds of hte
distance was over a mountain trail. How long is tehe mountain trail? 2/3 (x)10= 6 and...
how do you solve this type of math problems that indicates to perform the indicated operations for 3 and 3/4 divided by 1 and 3/8 convert the whole numbers into fractions. 3 3/4 = 15/4 1 3/8 = 11/8
to divide 15/4 by 11/8 invert the denominator and multiply. so 15/4 x 8/11. Cro...
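The invert-and-multiply step can be checked with exact rational arithmetic (a sketch using Python's `fractions` module):

```python
from fractions import Fraction

three_and_3_4 = Fraction(15, 4)    # 3 3/4 as an improper fraction
one_and_3_8 = Fraction(11, 8)      # 1 3/8 as an improper fraction

# Dividing by 11/8 is the same as multiplying by its reciprocal 8/11:
quotient = three_and_3_4 / one_and_3_8
print(quotient)                                       # 30/11, about 2.73
print(three_and_3_4 * Fraction(8, 11) == quotient)    # True
```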
Is there anywher i can go for further explanations on how to solve for the following statements on how to solve them: Dividing writing each result in simplest form & adding & subtracting I searched
Google under the key words "fractions divide add subtract" to get thi...
How do you write fractions in simplest form? for example I need explanation 48/66 Also I started on another one it states that I have to multiply. and I have to be sure to simplify each product 7/9 x
3/5 My answer: 21/45 I just don't know what goes after this step? to simp...
Let me know if this is correct now: (1)Finding the lcm using which ever method for:5,15 , and 20 8:2x2x2x3x5x5 15:2x2x2x3x5x5 20:2x2x2x3x5x5 --------------- LCM:2x2x2x3x5x5 After simplification
LCM:120 (2)Find the GCF for each of the following numbers: 36, 64, 180 Prime Factor...
math, algebra
Can someone check this for me : Find the gcf for each of hte following groups of numbers: 36,64, and 180 My answer: gcf=4 Find the lcm for each of the following groups of numbers which ever method
you wish:8,15, and 20 My answer: Lcm=1 Find two factors of 15 with a difference ...
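Both divisor questions above can be checked mechanically. The sketch below uses Python's `math.gcd` and folds the LCM pairwise (the helper names are just for this example); it confirms that the GCF of 36, 64, and 180 is 4, and that the LCM of 8, 15, and 20 is 120 rather than 1:

```python
import math
from functools import reduce

def gcf(numbers):
    """Greatest common factor of a list of integers."""
    return reduce(math.gcd, numbers)

def lcm(numbers):
    """Least common multiple, folded two numbers at a time."""
    return reduce(lambda a, b: a * b // math.gcd(a, b), numbers)

print(gcf([36, 64, 180]))   # 4
print(lcm([8, 15, 20]))     # 120
```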
History, Religions of the world
I need someone to explain to me how can religon affect people in many different ways. My understanding towards this is that all religons are different because they beleive in different things. So
many this can cause arguments between any two individuals who are from two differ...
I'm suppost to write a job-application letter to a company of interest attaching the resume. My question for this is on the job application letter on the heading i'm suppost to omitt my address and
on the resume as well. And also what type of format am i suppost to use...
writeacher, english
for a job application letter . the format for my information where my name goes on top my address should be ommitted and just keep the email address etc. And attach the resume. please help I still
seem not to see the the templates. What is the difference of an online resume an...
What is the difference between summiting a hard copy resume and an online resume? Where can I view example of both. How does a job-application letter to a company look like? http://
owl.english.purdue.edu/handouts/pw/index.html At this website, you'll find directions and ex...
drwls, math
Did you mean the following for the problem: 10) perform the indicated operations 3 and 3/4 (divided by ) 1 and 3/8 . 3 3/4 - 1 3/8 . 15/4 (divided by ) 11/8= 30/11 =(after rounding it should be 2.73)
thats how it suppost to be and look Yes, that is what I meant. You never said...
I don't know how to even start to work on this problem what should i consider. Science.Jose rode his trail bike fro 10 miles. two-thirds of hte distance was over a mountain trail.How long is the
mountain trail. How should i write the equation in order to solve it. I want t...
drawls, math
Did you mean the following for the problem: 10) perform the indicated operations 3 and 3/4 (divided by ) 1 and 3/8 . 3 3/4 - 1 3/8 . 15/4 (divided by ) 11/8= 30/11 =(after rounding it should be 2.73)
thats how it suppost to be and look You use the inverse of the divisor, and t...
I'm sorry the problems i posted had a couple of errors how i posted it here they are: Posted by DrBob222 on Friday, November 24, 2006 at 6:19pm in response to Math. Thanks for showing your work. It
helps us to know where you are having trouble. #4. [25/ (2^3-3)]*2 the star...
Pages: <<Prev | 1 | 2 | 3 | 4 | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Jasmine20&page=4","timestamp":"2014-04-16T05:23:28Z","content_type":null,"content_length":"16563","record_id":"<urn:uuid:d006bd67-400c-4332-99e7-3f13ad88eafd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
What to do?
After rolling the first ball of a frame in a game 10-pin bollowing, how many pin configurations are physically possible?
Me and dragon are friends, and he didnt post the correct problem. It should read: After rolling the first ball of a frame in a game of 10 pin bowling, how many different pin configurations can remain
(assuming all configurations are physically possible)? Assuming ThePerfectHacker's solution is still correct, could somebody explain it a little more clearly? (I'd make a new thread but I'm not sure
if it violates the identical thread rule.)
Try this: Draw the diagram of the pins:

    o o o o o o o o o o

Now knock over a pin:

    o o o o o o o x o o

That's 1 combination; now put that pin back up and knock over another pin:

    o o x o o o o o o o

That's another combination. Since you can knock over ten individual pins, there are 10 physical combinations of pins when 1 is knocked over... Now what if 2 were knocked over? Well, the first thing to do is to draw the pins with 1 knocked over:

    o o o o o o o o o x

Now, knock another one over:

    o o o o o o o x o x

Put it back up and knock another one over:

    o o o o o o x o o x

If you keep doing this you'll realize that there are nine combinations. Thus you have 9 combinations per pin; since you have 10 pins you have 9(10) combinations for the whole thing, which is 90. However, every one of those combinations is counted twice: notice that if I knock down a pin:

    o o o o o x o o o o

and knocked down another pin:

    o o o o o x o x o o

it is the same combination as if I knocked down this pin:

    o o o o o o o x o o

and then knocked down the second pin:

    o o o o o x o x o o

So the number of combinations for 2 pins is 90/2, which equals 45. So far you have 10+45=55 combinations. More to come (going to do it in another post)...
Now onto 3 pins. Knock over 2 pins:

    o o o o o o o x o x

Now knock over another one:

    o o o x o o o x o x

Reset and do it again:

    o x o o o o o x o x

You might realize that there are 8 possible combinations. Thus the number of combinations for three pins is the number of combinations for 2 pins times 8, which is 45*8=360. But wait! Each combination is counted three times! So you divide the number by 3; thus you have 360/3=120 combinations. Add that to the amount of combinations you have so far to get: 10+45+120=175. Now you try doing the amount of combinations for 4, 5, 6, 7, 8, 9, and 10 pins, and solve from there!
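The counts derived above — 10, 45, 120 — are the binomial coefficients C(10, k), and the same knock-one-more-pin-then-divide-out-the-overcounting argument produces every later term. A quick sketch of the finish (Python; whether the untouched rack, i.e. a gutter ball, counts as a "configuration" is left open by the problem statement):

```python
from math import comb

# comb(10, k): ways to choose which k of the 10 pins are knocked down
per_k = [comb(10, k) for k in range(11)]
print(per_k)            # [1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1]

# Continuing the running total 10 + 45 + 120 + ... (at least one pin down):
print(sum(per_k[1:]))   # 1023

# Counting the all-pins-standing rack as well gives every subset, 2**10:
print(sum(per_k))       # 1024
```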
o x o o o o o x o x | {"url":"http://mathhelpforum.com/algebra/6764-what-do-print.html","timestamp":"2014-04-20T07:15:58Z","content_type":null,"content_length":"12549","record_id":"<urn:uuid:b3e43c42-8cca-4630-9469-b9978d7edf61>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
AUDIO AND SLIDES
April 18, 2014
Audio and/or slides are available for talks given at the Fields Institute during the following events in the year July 2002 - June 2003.
For events from September 2012 onwards, plus selected events from June-August 2012, please see our video archive.
For events from other years, plus those June-August 2012 talks that are only available in audio format, please consult the audio/slides home page.
Various Dates Throughout the Year
June 2003
May 2003
April 2003
March 2003
February 2003
November 2002
October 2002
September 2002
August 2002
July 2002
Quantitative Finance Seminar Series
Applied Mathematics Colloquium
Coxeter Lecture Series
Colloquium Series on Mathematics Outside Mathematics
Graduate Course on Descriptive Set Theory
Graduate Course on Partition Theory and Banach Spaces
Graduate Course on Automorphic Functions
Graduate Course on Symmetric Power L-Functions And Applications To Analytic Number Theory
Graduate Course on L-functions, Converse Theorems, and Functoriality for GL(n)
Annual General Meeting (June 12, 2003)
Clay Mathematics Institute Summer School on Harmonic Analysis, The Trace Formula and Shimura Varieties (June 2-27, 2003)
Special Lecture: Manindra Agrawal (May 23, 2003)
Workshop on Automorphic L-functions (May 5-9, 2003)
Distinguished Lecture Series in Statistical Science (April 23-24, 2003)
Distinguished Lecture Series (April 9-11, 2003)
Workshop on Shimura Varieties and Related Topics (March 4-8, 2003)
Mathematics Online Meeting (February 27, 2003)
Second Annual Conference on Personal Risk Management (November 21, 2002)
Workshop on Geometry of Banach spaces and Infinite Dimensional Ramsey Theory (November 10-16, 2002)
Workshop on the Mathematics of Computer Animation (November 8-9, 2002)
Graduate School Information Day (November 2, 2002)
Workshop on Industry, Business, Mathematics and Computer Algebra (October 25, 2002)
CRM-Fields Prize Lecture (October 22, 2002)
FRSC Day (October 19, 2002)
Workshop on Descriptive Set Theory, Analysis and Dynamical Systems (October 6-12, 2002)
Special Lecture Series during the Thematic Program on Set Theory and Analysis (October 1-3, 2002)
Workshop on Categorical Structures for Descent and Galois Theory, Hopf Algebras and Semiabelian Categories (September 23-28, 2002)
"ADHOC-NOW" Conference on Ad-Hoc Networks and Wireless (September 20-21, 2002)
2002 Workshop on the Solution of Partial Differential Equations on the Sphere (August 12-15, 2002)
Workshop on Geometry, Dynamics and Mechanics in Honour of the 60th Birthday of J.E. Marsden (August 7-11, 2002)
Lecture Series during the Thematic Year on Numerical and Computer Challenges in Science and Engineering (July 31, 2002)
Workshop on Non Self-Adjoint Operator Algebras (July 8-12, 2002)
For events from other years, plus those June-August 2012 talks that are only available in audio format, please consult the audio/slides home page. | {"url":"http://www.fields.utoronto.ca/audio/02-03/","timestamp":"2014-04-18T19:01:09Z","content_type":null,"content_length":"74416","record_id":"<urn:uuid:6ce3c91e-f3e9-44b5-8590-6b41894142ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
N Richlnd Hls, TX Geometry Tutor
Find a N Richlnd Hls, TX Geometry Tutor
...My current focus is specifically on tutoring advanced chemistry and calculus with an emphasis on university and advanced high school placement courses in mathematics and science. Scholarship
Totals: is greater than $2,000,000 and growing. My goal is to have my students obtain more than $20,000,000 in scholarships by 2016.
93 Subjects: including geometry, reading, chemistry, English
...I have also worked as a high school teacher for six years and have taught mathematics, physics, science and computer science. My teacher registration is current. I have a very good
understanding of the difficulties that many students face and common misconceptions they have when learning new content.
56 Subjects: including geometry, chemistry, calculus, physics
...Before transferring to Trinity, I attended the United States Naval Academy in Annapolis, Maryland for three years. I was an applied math major at the academy, and finished my math degree at
Trinity University. I was a Division I club soccer player growing up in Dallas.
14 Subjects: including geometry, chemistry, ASVAB, SAT math
I graduated from Midwestern State University in 2008 with honors and a 3.8 GPA. My degree is in Education with a focus on 4th through 8th grade Math. I taught 7th and 8th grade Math for five
years at Central Junior High in HEB ISD.
3 Subjects: including geometry, elementary math, prealgebra
...Working at the student’s pace, I hope to help them achieve a sense of accomplishment, a more positive attitude towards learning, and confidence in their own intellectual abilities. I have
experience working with middle and high school students in the SES “No Child Left Behind” program tutoring s...
13 Subjects: including geometry, reading, English, algebra 1
Related N Richlnd Hls, TX Tutors
N Richlnd Hls, TX Accounting Tutors
N Richlnd Hls, TX ACT Tutors
N Richlnd Hls, TX Algebra Tutors
N Richlnd Hls, TX Algebra 2 Tutors
N Richlnd Hls, TX Calculus Tutors
N Richlnd Hls, TX Geometry Tutors
N Richlnd Hls, TX Math Tutors
N Richlnd Hls, TX Prealgebra Tutors
N Richlnd Hls, TX Precalculus Tutors
N Richlnd Hls, TX SAT Tutors
N Richlnd Hls, TX SAT Math Tutors
N Richlnd Hls, TX Science Tutors
N Richlnd Hls, TX Statistics Tutors
N Richlnd Hls, TX Trigonometry Tutors | {"url":"http://www.purplemath.com/N_Richlnd_Hls_TX_geometry_tutors.php","timestamp":"2014-04-19T17:49:36Z","content_type":null,"content_length":"24262","record_id":"<urn:uuid:fc756336-6581-486b-9a6d-49cede88472a>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Springer Book: Modular Average-Case Time Analysis
Michel Schellekens m.schellekens at cs.ucc.ie
Fri Sep 5 09:04:03 EDT 2008
The following recently published book may be of interest to FOM
A Modular Calculus for the Average Cost of Data Structuring
The book introduces a new mathematical theory of random bag preservation
which provides a unified foundation for algorithmic analysis, as an
alternative to the collection of ad hoc methods available in this area.
Mathematicians with an interest in finite partial orders, linear
extensions and random structures or foundations of Computer Science in
general, may find this work of interest.
Best wishes,
Michel Schellekens
A Modular Calculus for the Average Cost of Data Structuring
Schellekens, Michel
2008, XXIV, 246 p. 20 illus. With CD-ROM., Hardcover
ISBN: 978-0-387-73383-8
A Modular Calculus for the Average Cost of Data Structuring introduces
MOQA, a new domain-specific programming language which guarantees the
average-case time analysis of its programs to be modular. "Time" in this
context refers to a broad notion of cost, which can be used to estimate
the actual running time, but also other quantitative information such as
power consumption, while modularity means that the average time of a
program can be easily computed from the times of its
constituents--something that no programming language of this scope has
been able to guarantee so far. MOQA principles can be incorporated in
any standard programming language.
MOQA supports tracking of data and their distributions throughout
computations, based on the notion of random bag preservation. This
allows a unified approach to average-case time analysis, and resolves
fundamental bottleneck problems in the area. The main techniques are
illustrated in an accompanying Flash tutorial, where the visual nature
of this method can provide new teaching ideas for algorithms courses.
This volume, with forewords by Greg Bollella and Dana Scott, presents
novel programs based on the new advances in this area, including the
first randomness-preserving version of Heapsort. Programs are provided,
along with derivations of their average-case time, to illustrate the
radically different approach to average-case timing. The automated
static timing tool applies the Modular Calculus to extract the
average-case running time of programs directly from their MOQA code.
A Modular Calculus for the Average Cost of Data Structuring is
designed for a professional audience composed of researchers and
practitioners in industry, with an interest in algorithmic analysis and
also static timing and power analysis--areas of growing importance. It
is also suitable as an advanced-level text or reference book for
students in computer science, electrical engineering and mathematics.
Michel Schellekens obtained his PhD from Carnegie Mellon University,
following which he worked as a Marie Curie Fellow at Imperial College
London. Currently he is an Associate Professor at the Department of
Computer Science in University College Cork - National University of
Ireland, Cork, where he leads the Centre for Efficiency-Oriented
Languages (CEOL) as a Science Foundation Ireland Principal Investigator.
Written for:
Researchers and students interested in algorithm analysis, static
analysis, real-time programming, programming language semantics
* random structures
* real-time languages
* series-parallel data structures
* software timing/power analysis
* sorting and search algorithms
* static analysis
Prof. M. P. Schellekens
Science Foundation Ireland Investigator
Director of Centre for Efficiency-Oriented Languages (CEOL)
National University of Ireland, Cork (UCC)
Department of Computer Science
Center for Efficiency-Oriented Languages (CEOL)
Lancaster Hall
Little Hanover Street 6
Cork, Ireland
WWW: http://www.ceol.ucc.ie/
Tel. + 353 (0)21 490 1917
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-September/013030.html","timestamp":"2014-04-16T22:12:16Z","content_type":null,"content_length":"6566","record_id":"<urn:uuid:5afb19c7-9d43-4e34-a20e-1d901e3fcc44>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
It’s Not Scientific Parenting Without Graphs
Early last year, we began marking SteelyKid’s height off on a door frame in the library. She occasionally demands a re-measurement, and Saturday was one of those days. Which made me notice that we
now have a substantial number of heights recorded, and you know what that means: it’s time for a graph.
The “featured image” above shows the results, and I’ve stuck in a zero-day point using her length at birth. The solid line in the graph is a linear fit to the data, because nothing could possibly be
wrong with that. According to the fit, we can project that she’ll reach a height of 3.0 meters a few months before her twentieth birthday, and since the R^2 value is 0.99, you know that’s solid
Her current height, for the record, I rounded up to 44.5 inches. And if you want to see just how big she’s gotten, here’s a photo from an after-dinner trip to the playground at what will be her
school in just a couple of weeks, when she starts kindergarten. Yikes.
I got The Pip to stand by the door frame on Saturday as well, and his official height was 33 inches. As it’s a single data point, though, it’s not worth graphing– you’ll have to wait until we
accumulate more data before we can compare the two in a truly scientific manner…
1. #1 Wes Morgan August 25, 2013
Now do an analysis of your grocery bills. *chuckle*
2. #2 Bee August 25, 2013
Interestingly the curve is indeed pretty much linear for quite some time before it stagnates, but I don’t think the length at birth should be on the linear trend. They grow faster in the first
months, see eg http://happybabyusa.files.wordpress.com/2011/05/growth-chart.jpg
3. #3 Michael Kelsey
SLAC National Accelerator Laboratory
August 25, 2013
Scientific parenting, indeed :-) If you were doing the regular post-natal checkups, you should have a whole bunch of data points up to around day 400 or so, and you’ll see that the actual curve
down there is roughly logarithmic.
When my daughter was born, I started recording height, weight, and head size in Excel. I also found the raw data from both the CDC and WHO which are used to generate the height-weight curves.
4. #4 RM August 25, 2013
Bee, I don’t think this is inconsistent with that burst phase effect. It looks like if you remove the t=0 point and refit the SteelyKid data, you’ll get an intercept of 65-ish rather than 50.
From the linked chart, burst phase accounts for ca. 10 cm. of height difference.
What we can conclude from this, though, is that Chad is a terrible parent. ;-) Either for malnourishing his kid during a crucial growth phase, or by being negligent enough to skip recording
crucial height data during the early period of this experiment. – He really needs to make it up to SteelyKid. Buy her a pony or something. It’s safer not to have a 2+ meter tall teen upset at | {"url":"http://scienceblogs.com/principles/2013/08/25/its-not-scientific-parenting-without-graphs/","timestamp":"2014-04-16T22:21:41Z","content_type":null,"content_length":"77400","record_id":"<urn:uuid:f345d3c2-8010-4156-bb72-61f284426364>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: use of rologit
From Phil Schumm <pschumm@uchicago.edu>
To statalist@hsphsun2.harvard.edu
Subject Re: st: use of rologit
Date Fri, 22 Aug 2008 08:47:13 -0500
On Aug 21, 2008, at 7:40 PM, Data Analytics Corp. wrote:
A potential client asked for a form of analysis sometimes done in market research called maximum-difference (maxdiff) analysis. Basically, consumers are presented with a series of product
attributes (say, four at a time or a "quad") and then are asked to select the one they prefer the most and the one they prefer the least. For a quad, the other two non-selected attributes just
fall in the middle. This is repeated several times with the attributes rotating following an experimental design. Since the attributes are rated, I was thinking of using rologit to estimate a
model. But then I found a statement in the reference manual that says that "rologit does not allow for other forms of incompleteness, for instance, data in which respondents indicate which of
four cars they like best, and which one they like least, but not how they rank the two intermediate cars." But this is exactly my problem. Does anyone know how to trick rologit, or know of an ado
file to handle this problem?
The rank ordered logit model may also be fit using -stcox-. Allison and Christakis [1] show how this approach can be generalized to handle the types of ties you describe above. Also, Jeroen Weesie
wrote a Stata command many years ago called -elogit- to fit this model (try - findit elogit-) -- I can't recall though how he handled the subject of ties.
-- Phil
[1] P. D. Allison and N. A. Christakis. Logit models for sets of ranked items. Sociological Methodology, 24:199–228, 1994.
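For what it's worth, a common way to write down the maxdiff likelihood directly is as a sequential logit: the "best" item is a logit choice from the full quad, and the "worst" is a logit choice (with negated utilities) from the remaining three. The sketch below only illustrates that structure in Python — it is not what -rologit- or -stcox- compute, and the utility values are hypothetical:

```python
from math import exp

def maxdiff_prob(utilities, best, worst):
    """P(item `best` picked as most preferred, then `worst` as least
    preferred) under a sequential best-then-worst logit model."""
    p_best = exp(utilities[best]) / sum(exp(u) for u in utilities)
    remaining = [j for j in range(len(utilities)) if j != best]
    p_worst = exp(-utilities[worst]) / sum(exp(-utilities[j]) for j in remaining)
    return p_best * p_worst

u = [0.5, 1.2, -0.3, 0.0]   # hypothetical utilities for one quad of attributes

# The (best, worst) outcomes are exhaustive, so the probabilities sum to 1:
total = sum(maxdiff_prob(u, b, w) for b in range(4) for w in range(4) if b != w)
print(round(total, 12))      # 1.0
```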
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-08/msg00915.html","timestamp":"2014-04-17T12:52:08Z","content_type":null,"content_length":"7071","record_id":"<urn:uuid:e7d71798-42c0-4525-8823-c0f3a4d671c7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trigonometry...test tomorrow!
January 10th 2008, 05:44 PM #1
Jan 2008
Trigonometry...test tomorrow!
Hello everyone so I have a math test tomorrow and im stuck on a review question. It's trigonometry in a functions class. I was just wondering if anyone would be able to figure it out.
heres the question
1. Given sinx + cosx = 2/3 , calculate the value of sin2x
any help will be appreciated.
$\sin x + \cos x = \frac 23$
$\Rightarrow (\sin x + \cos x)^2 = \frac 49$
$\Rightarrow \sin^2 x + 2 \sin x \cos x + \cos^2 x = \frac 49$
can you continue?
I understand how you got to the 3rd step however i still don't know what to do after that. Do I factor it? or is there an identity that I need to use to continue?
The double angle formula states that sin2x = 2*sinx*cosx. That may be helpful along with another trig identity. I suggest memorizing these. If its anything like my highschool trig classes, you
will be using them a lot. Many problems can be solved by using a well defined procedure for that given class of problems, combined with a trick. Many students know the procedure but have trouble
finding the trick. In this case, I would consider the identities the trick.
right you are.
@ aries: the other identity you MUST know is $\sin^2 x + \cos^2 x = 1$
now can you continue?
great! I'm pretty sure I can finish it now.
thanks for the help.
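To tie off the route the thread sketches numerically: squaring gives 1 + 2·sinx·cosx = 4/9, and 2·sinx·cosx is exactly sin2x, so sin2x = 4/9 - 1 = -5/9. A quick check (this uses sinx + cosx = √2·sin(x + π/4) to pick a concrete x satisfying the condition):

```python
from math import asin, sin, cos, sqrt, pi, isclose

# sin x + cos x = sqrt(2) * sin(x + pi/4), so one solution of
# sin x + cos x = 2/3 is:
x = asin((2 / 3) / sqrt(2)) - pi / 4

print(sin(x) + cos(x))              # 0.666..., i.e. 2/3
print(sin(2 * x), -5 / 9)           # both -0.555..., i.e. sin 2x = -5/9
print(isclose(sin(2 * x), -5 / 9))  # True
```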
January 10th 2008, 05:56 PM #2
January 10th 2008, 06:21 PM #3
Jan 2008
January 10th 2008, 06:50 PM #4
Sep 2007
January 10th 2008, 06:55 PM #5
January 10th 2008, 07:01 PM #6
Jan 2008 | {"url":"http://mathhelpforum.com/trigonometry/25889-trigonometry-test-tomorrow.html","timestamp":"2014-04-16T14:33:53Z","content_type":null,"content_length":"45572","record_id":"<urn:uuid:2dd7fd20-8f7d-4ad7-8a6a-0a67b2071e55>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
Squares are drawn on each side of a right triangle.
2. a) Draw two squares. Now draw a third square equal to the two of them together.
3. ABCD is a circle with center E, and EF, EG are perpendiculars to the straight lines AB, CD respectively.
5. Proposition 48.
If the square drawn on one side of a triangle is equal to the squares
drawn on the other two sides, then the angle contained by those two sides
is a right angle.
In triangle ABC, let the square drawn on BC be equal to the squares drawn on the sides CA, AB;
then angle CAB is a right angle.
From the point A, draw AD at right angles to CA;
make AD equal to BA;
and draw CD.
(The proof will now show that triangles CAD, CAB are congruent; hence angle CAB is also a right angle. The student should justify each statement.)
Then, because AD is equal to BA, the square on AD is equal to the square on BA. I-46, Problem 5a.
To each of these join the square on AC.
Therefore the squares on AD, AC are equal to the squares on BA, AC. Axiom 2
But because angle CAD is a right angle, Construction
the square on CD is equal to the squares on DA, AC. I. 47
And, by hypothesis, the square on CB is equal to the squares on BA, AC.
Therefore the square on CD is equal to the square on CB. Axiom 1
Therefore the side CD is equal to the side CB. I-46, Problem 5b.
Now, because side DA is equal to side BA,
and side CA is common to triangles CAD, CAB,
the two sides CA, AD are equal to the two sides CA, AB respectively;
and we have shown that the base CD is equal to the base CB;
therefore angle CAD is equal to angle CAB. S.S.S.
But angle CAD is a right angle; Construction
therefore angle CAB is also a right angle. Axiom 1
Therefore, if the square etc. Q.E.D.
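The converse can also be seen numerically through the law of cosines, cos A = (b² + c² − a²) / 2bc: when a² = b² + c² the cosine is zero, so the angle opposite a is a right angle. A small sketch (the 3–4–5 triangle is just a convenient example):

```python
from math import acos, degrees

def angle_opposite(a, b, c):
    """Degrees of the angle opposite side a, from the law of cosines."""
    return degrees(acos((b * b + c * c - a * a) / (2 * b * c)))

print(angle_opposite(5, 3, 4))   # 90.0, since 5**2 == 3**2 + 4**2
print(angle_opposite(6, 3, 4))   # about 117.3: a**2 > b**2 + c**2, obtuse
```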
Copyright © 2012 Lawrence Spector
E-mail: themathpage@nyc.rr.com | {"url":"http://www.themathpage.com/aBookI/geoProblems/I-47Prob.htm","timestamp":"2014-04-20T13:19:32Z","content_type":null,"content_length":"8292","record_id":"<urn:uuid:f640853c-a491-4727-aea2-c8c1cdf50dea>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
Advogato: Blog for major
I have found numerous references to some number theory that seems to draw a parallel to my whole computing-sums scenario that has been driving me crazy as of recent.
The key term is digital root. It seems that my reducing the values of a number to a single digit by adding together each of the digits in the number is computing the digital root. Okay, no biggie there. There also seems to be a semi-weak relation to sigma codes, though I haven't quite decided yet if what I have run across is in fact some property of sigma codes or not.
Originally I had used a double iterative loop to compute the digital root, but I have run across a much faster way of computing it for base 10 (the 9 can be replaced with baseN - 1):
dr = 1 + (((n ** x) - 1) % 9)
And I have run across a curious conjecture in my recent insanity into this whole thing:
dr(n ** x) = dr(n ** dr(x))
It also seems that: if dr(n) == dr(i) then dr(n ** x) = dr(i ** x)
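To make the shortcut and the second observation concrete, here is a quick empirical check (Python; the function names are mine). The closed form works because a number is congruent to its digit sum mod 9, and the second observation follows since dr(n) == dr(i) means n ≡ i (mod 9), hence n**x ≡ i**x (mod 9):

```python
def dr_loop(n):
    """Digital root by repeatedly summing digits (the double-loop way)."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def dr_fast(n):
    """Digital root via the mod-9 closed form, valid for n >= 1."""
    return 1 + (n - 1) % 9

# The closed form agrees with the naive loop:
assert all(dr_loop(n) == dr_fast(n) for n in range(1, 5000))

# If dr(n) == dr(i), their powers share digital roots: e.g. 2 and 11.
assert all(dr_fast(2 ** x) == dr_fast(11 ** x) for x in range(1, 20))
print("checks passed")
```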
I am still trying to piece together a proof for this with a friend at work. As soon as we come up with one (more likely he will do all the work here, I am horrible at proofs), we have decided to see about finding a proof that describes the reproducible patterns that occur. And then it is off to explain why it is that predictable powers in the pattern intersect at a digital root of 1 at the same
time, and a seperate predictable set of powers never reduce 1, and why it is that these powers are always a fixed distance apart. | {"url":"http://www.advogato.org/person/major/diary/1.html","timestamp":"2014-04-17T07:27:37Z","content_type":null,"content_length":"5012","record_id":"<urn:uuid:07daf516-fc1c-4980-8c5c-9083a75d54c9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
FRM Exam Archives - Finance Train
This is one of the most commonly debated questions among students who have made up their mind about furthering their career and adding a designation such as CFA, FRM, or CAIA to their name. Making
the right choice can be crucial for a couple of reasons: 1) Each of these certifications are expensive, for example, […] | {"url":"http://financetrain.com/category/frm-exam/","timestamp":"2014-04-16T18:56:30Z","content_type":null,"content_length":"71951","record_id":"<urn:uuid:576a9ff0-c5c1-4f54-9565-de684868878f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Graphics Lighting Transformation
A sphere can be transformed into a spheroid by either a scale transformation or by changing the plot range while keeping the bounding box ratios constant. These transformations are specified by the
spheroid control setting, which gives the vertical over horizontal semi-axis ratio of the resulting spheroid. A control setting less than 1 gives an oblate spheroid, greater than 1 a prolate
spheroid, and equal to 1 an untransformed sphere.
Though a scale and a plot range transformation can generate the same spheroid, these transformations differ in their effect on lighting. The scale transformation leaves the lighting direction fixed,
but the plot range transformation applies the same transformation to both the sphere and the light source. This distinction is shown by introducing two light sources: a fixed cyan light source that
is excluded from the sphere to spheroid transformation and a transformed magenta source that is included in the transformation.
The light angle control specifies a 0 to π radian inclination angle of these light sources before any transformation. The cyan light source direction is simply specified by the light angle control
without any transformation effect and is thus independent of the spheroid control setting. On the other hand, the magenta light source direction is determined by applying the spheroid transformation
to a lighting direction vector with an inclination angle specified by the light angle control. This magenta lighting direction thus depends on both spheroid and light angle control settings.
The cyan and magenta light sources have the same direction if the light angle is 0, π/2, or π, or if the spheroid setting is 1, which are all situations where the spheroid transformation does not
change the direction of the magenta source. The light source control toggles the light sources on and off. The cyan source is controlled by the fixed button, so named because this light source does
not transform, and the magenta light source is controlled by the transformed button, so named because this source is transformed.
The diffuse reflected light intensity of a surface that obeys Lambert's law depends on the cosine of the angle between the surface normal and the light source direction. This means that the surface
appears brightest at the point where a light source is normally incident. The normal incident points for the fixed cyan light source and transformed magenta light source are indicated by cyan and
magenta arrows on the spheroid's surface. It is evident that even though the lighting direction of the cyan source is fixed by the light angle control, its point of normal incidence on the spheroid
varies with the spheroid control setting.
The details setter bar selects the main view (snapshot 1) and the basic and advanced detail views (snapshots 2 and 3), which show 2D cross-sections defined by the intersection of the vertical
spheroid symmetry axis and the lighting direction. The left side of the basic view shows the fixed cyan light source and the spheroid generated by a scale transformation. The right basic view shows
the transformed magenta light source and the spheroid generated by changing the plot range. The left and right cross-sections also indicate the normal incidence points and terminators for the cyan
and magenta light sources. As the spheroid control is varied, it is evident that the left axis labels, plot range, and cyan lighting direction remain constant because this spheroid transformation
involves only scaling the sphere, whereas the right axis labels and magenta light direction vary continuously with the spheroid setting because this spheroid transformation changes the plot range.
Finally, the black radial vector on the right basic view is useful for visual confirmation that the magenta light source is a constant before transformation. As the spheroid control moves through
the oblate range, it is possible to visualize a 3D disk rotating about the horizontal axis and imagine that the radial vector is fixed on the disk, and likewise in the prolate range to visualize a
3D disk with fixed radial vector rotating about the vertical axis.
The advanced details view (snapshot 3) further elaborates the basic view's left side scale and right side plot range sphere to spheroid transformations. The right side advanced view doesn't actually
include the plot range transformation; instead it shows the sphere cross-section, light sources, normal incidence points, and terminators before applying the plot range transformation. The sphere
cross-section is just that, and the magenta light source simply needs to be fixed at the angle defined by the light angle control and its terminator is perpendicular to this angle. The magenta light
source is referred to as the transformed light source precisely because it naturally undergoes the same sphere to spheroid transformation created by changing the plot range at constant bounding box
ratios. Of course, the point of normal incidence of the magenta light on the spheroid is not simply given by transforming its point of normal incidence on the sphere. Furthermore the direction and
length of the arrows marking the point of normal incidence must be so constructed that they are of correct length and tangent or normal after the plot range transformation. As is immediately evident
in the advanced details view, it also differs from the basic details view in that both the cyan and magenta light sources are shown on both the left and right side of the view. For example, the left
side advanced view starts with the basic view fixed cyan source and adds the transformed magenta light source. Because all objects and light sources on the left side are individually scaled, this
means that the sphere to spheroid transformation must be manually applied to the magenta source as already described above in the main view caption: "the magenta light source direction is determined
by applying the spheroid transformation to a lighting direction vector with an inclination angle specified by the light angle control." The right side advanced view essentially swaps the lighting
procedure. It starts with the basic details view transformed magenta source, but this source is fixed at the inclination angle specified by the light angle control and does not rotate with changes
in the spheroid control because the right advanced view shows everything before applying the plot range transformation. To add the fixed cyan source, the inverse sphere to spheroid transformation
must be applied to a lighting direction vector with an inclination angle specified by the light angle control. Thus on the right side of the advanced details view, the magenta light source will be
fixed and the cyan source will be seen to rotate as the spheroid control setting is changed.
All the transformations appearing in the source code are derived from the following definitions and equations. A point on the elliptical cross-section through the vertical symmetry axis of a spheroid can be simply parameterized by the angle θ, such that a point p(θ) is given by the action of a positive-valued diagonal matrix D on a unit angle vector u(θ):

p(θ) = D u(θ),  D = diag(a, b),  u(θ) = (sin θ, cos θ).

The unit angle vector horizontal component is given by sin θ and the vertical component by cos θ because the inclination angle is measured from the vertical. The diagonal components of the matrix D are the spheroid equatorial radius a and polar radius b, or equivalently, the ellipse horizontal radius a and vertical radius b. A normal vector n(θ) at a point is given by a 90° rotation of the point vector partial derivative with respect to the parameter:

n(θ) = (1/(ab)) R(90°) ∂p(θ)/∂θ = D⁻¹ u(θ).

The scale factor 1/(ab) sets the normal vector exactly equal to the action of the inverse diagonal matrix on the unit angle vector and makes explicit the symmetry of a spheroid point and its normal, which are respectively generated by the action of the diagonal matrix and its inverse on the angle vector. The above normal vector equation essentially gives a relation between the light angle and the point of normal incidence. To complete this relation, let ϑ be the light inclination angle from vertical and set the corresponding unit angle vector equal to the normalized normal vector:

u(ϑ) = D⁻¹ u(θ) / |D⁻¹ u(θ)|.

Given a parameterized point on a spheroid, this equation gives the light angle ϑ that is normally incident on that point. In practice we usually want the inverse angle relation, that is, given the light angle ϑ find the normal incidence point angle parameter θ. The inverse relation is easily derived by left-multiplying both sides of the above ϑ equation by the diagonal matrix and
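Carrying out that left multiplication componentwise gives tan θ = (a/b) tan ϑ for the light inclination angle ϑ, which can be checked numerically; a sketch of the relations just described (function names are mine):

```python
import math

def point(theta, a, b):
    """p(theta) = D u(theta) with u = (sin, cos), angle from vertical."""
    return (a * math.sin(theta), b * math.cos(theta))

def normal(theta, a, b):
    """n(theta) = D^-1 u(theta), normal to the ellipse at p(theta)."""
    return (math.sin(theta) / a, math.cos(theta) / b)

def incidence_parameter(light_angle, a, b):
    """Given light inclination (from vertical) and radii (a equatorial,
    b polar), return the parameter theta of the normal-incidence point:
    tan(theta) = (a/b) tan(light_angle)."""
    return math.atan2(a * math.sin(light_angle), b * math.cos(light_angle))

# Check: at the recovered point, the normal is parallel to the light
# direction (sin phi, cos phi) -- their 2D cross product vanishes.
a, b, phi = 2.0, 1.0, math.pi / 3
theta = incidence_parameter(phi, a, b)
nx, nz = normal(theta, a, b)
assert abs(nx * math.cos(phi) - nz * math.sin(phi)) < 1e-12
```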
Consistent Minimization of Clustering Objective Functions
Ulrike v. Luxburg, Sebastien Bubeck, Stefanie Jegelka and Michael Kaufmann
In: NIPS 2007, Vancouver, Canada (2008).
Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the
statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true
partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of “nearest neighbor clustering”. Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
EPrint Type: Conference or Workshop Item (Paper)
Project Keyword: Project Keyword UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 3905
Deposited By: Ulrike Von Luxburg
Deposited On: 25 February 2008
measurement of inrush current of transformer
Posted by: vvvkkk
Created at: Wednesday 14th of July 2010 09:45:37 AM
Last Edited Or Replied at: Thursday 25th of October 2012 06:14:05
plz send me pdf fil...
Posted by: ASAD
Created at: Friday 20th of November 2009 01:46:42 AM
Last Edited Or Replied at: Saturday 21st of November 2009 09:20:39
thanks for start helping me but i m focusing about formulas like if i m designing a 100KVA transformer 11000/433 50 Hz so i want to know about its inrush current how much this transformer should take starting current...
Posted by:
Created at: Thursday 15th of November 2012 03:26:02 AM
Last Edited Or Replied at: Friday 16th of November 2012 01:52:52
i need the basic ckt diagram of the project so that it can be easily implemented pra...
Functions! f(x)=
January 5th 2009, 01:21 AM
Functions! f(x)=
ok so i have my core 1 exam on friday and am struggling with SOME functions.
i am at work and dont have the question for reference but its along the lines of
(x-2) is a factor of f(x)
find the values of k and c in f(x)=2x^2+kx+c
thanks in advance
January 5th 2009, 01:48 AM
I'm not sure but did you mistype your question, I think it should be something like:
Can you remember what other information you are given about the roots of the equation?
With questions like these you usually use the discriminant, $b^2-4ac$.
For equal roots, $b^2-4ac=0$
For different real roots, $b^2-4ac>0$
For no real roots, $b^2-4ac<0$
I know this doesn't answer your question but hope it helps you when you have the actual question in front of you (Wink)
January 5th 2009, 01:54 AM
it very possibly said something along the lines of f(x)=6
if that doesn't mean anything i will have a look when i get home and retype on this post with the whole question and how it is worded
January 5th 2009, 01:59 AM
Yeh post the actual question when you get home and I'll try and get my head round it then ;)
January 5th 2009, 08:11 AM
the actual question!
You are given that f(x) = x^3 + kx +c. The value of f(0) is 6, and x-2 is a factor of f(x).
find the values of k and c
there we go, helps to have what your asking in front of you lol, thanks in advance :D
January 5th 2009, 11:43 AM
mr fantastic
f(0) = 6 => 6 = c.
(x - 2) is a factor => f(2) = 0 => 0 = 2^3 + 2k + c
And now you should be able to finish this.
January 5th 2009, 11:47 AM
thanks, can you explain to me why c is 6? is it to do with >0 and <0 ?
0 = 2^3 + 2k + 6
0 = 8 + 2k + 6
-14 = 2k
-7 = k ?
thanks again
January 5th 2009, 11:51 AM
mr fantastic
January 5th 2009, 11:51 AM
Ah this isn't as hard as I first thought :)
You have the following equation
$x^3 + kx +c$
and you are told that $f(0) = 6$
You can therefore work out that when you sub 0 into the equation it equals 6, giving you:
$0^3 + 0k +c = 6$
therefore $c = 6$
Reading on from this you are told that x - 2 is a factor, therefore subbing 2 into the equation will give you zero.
$2^3 + 2k + 6 = 0$
The rest I'm sure is clear :)
Hope this explains it
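The thread's solution can be verified in a couple of lines (a sketch, not part of the original posts):

```python
# f(x) = x**3 + k*x + c with f(0) = 6 and (x - 2) a factor, i.e. f(2) = 0.
c = 6                     # f(0) = 0**3 + 0*k + c = 6
k = -(2**3 + c) // 2      # f(2) = 8 + 2k + c = 0  ->  k = -7
f = lambda x: x**3 + k * x + c
assert f(0) == 6
assert f(2) == 0
print(k, c)  # -7 6
```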
January 5th 2009, 11:52 AM
Sorry I've got beaten to the answer ;)
January 5th 2009, 11:57 AM
Thanks guys!
not as hard as i thought, just never had functions explained to me and got confused xD
thanks again!
FQXi - Foundational Questions Institute
Dr. Ken D. Olum, Tufts University
Project Title: Does General Relativity Permit Exotic Phenomena?
Project Summary
Is it possible to create a stable wormhole, or to travel faster than light or backward in time? In general relativity, matter and energy curve space-time. If the space-time were curved enough and in
the right way, such exotic things would be possible. Is there any matter and energy that would produce such curvature? It would have to have negative energy density. That is unlike nearly every kind
of matter and energy that we know, but quantum mechanics does sometimes give rise to negative energy densities, for example between parallel plates in the Casimir effect. But it is not enough just to
have negative energy density in a few places. Rather, the energy density added up along the complete path of a light ray must be negative. We already showed that this is impossible in flat
space-time, even when there are boundaries, as in the Casimir system. The present project intends to show that even in curved space-time it is impossible to have the required negative energy, and
thus that one cannot construct a wormhole or a time machine, or travel faster than light.
Posts by
Total # Posts: 827
personal finances
nevermind the answer is b thanks for the answer NOT
personal finances
umm can someone help me please
personal finances
A family with $45,000 in assets and $22,000 of liabilities would have a net worth of: A. $45,000. B. $23,000. C. $22,000. D. $67,000. im thinking d
The Mad River flows at a rate of 2 km/h. In order for a boat to travel 79.2 km of river and then return in a total of eight hours, how fast must the boat travel in still water?
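This upstream/downstream problem reduces to a quadratic in the still-water speed v, from 79.2/(v + 2) + 79.2/(v − 2) = 8; a quick numerical check (a sketch, not from the original post):

```python
import math

# 79.2/(v+2) + 79.2/(v-2) = 8  ->  8v^2 - 158.4v - 32 = 0
a_, b_, c_ = 8.0, -158.4, -32.0
v = (-b_ + math.sqrt(b_**2 - 4 * a_ * c_)) / (2 * a_)  # positive root
assert abs(79.2 / (v + 2) + 79.2 / (v - 2) - 8) < 1e-9
print(round(v, 6))  # 20.0
```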
c is correct I just took my exam on ashworth and it said c is the answer
the answer is c
A person should not drink at all if he or she is: A. 21 and over B. Jobless C. Breastfeeding D. None of the above I got D
I think its c
Signs of a sleep disorder include? A. Trouble staying awake during the day B. Pauses in breathing during sleep C. Both A and B D. Coughing and sneezing
What is the Affordable Care Act? A. A program that allows people who are at higher risk for chronic diseases to gain free memberships to gyms and fitness facilities. B. A program that allows people
who are at lower risk for chronic diseases to gain free memberships to gyms and...
percentages/word problem
Carousel Toys has Romper Buckaroos, wooden rocking horses for toddlers, on a 30% markdown sale for $72.09. What was the original price before they were marked down?
percentages/word problem
A refrigerator costs a company $1,600 to manufacture. If it sells for $3,456, what is the percent markup based on cost? (Round to the nearest whole percent.)
percentages/word problem
A wholesaler buys plastic lamps for $18 each and sells them for $30.60. What is the percent markup based on cost?
percentages/word problem
The Quick Plumbing Supply sells a set of faucets for $79.75. If a 45% markup based on cost was used, what was the cost of the faucets to Quick Plumbing? (Round to the nearest cent.)
percentages/word problem
The Soft Step Floor Company purchases carpeting for $5.12 per square yard and marks it up 55% based on the selling price. What is the selling price per square yard? (Round to the nearest cent.)
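For markup based on selling price, the cost is the complement of the markup fraction of the selling price; e.g. for the carpeting question above (a sketch, mine):

```python
# cost = (1 - markup) * selling_price, so selling_price = cost / (1 - markup)
cost, markup = 5.12, 0.55
selling_price = cost / (1 - markup)
print(round(selling_price, 2))  # 11.38
```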
percentages/word problem
Beacon Electronics advertised a portable TV set on sale for $272.49 after a 35% markdown from the original price. What was the original price of the TV set? (Round to the nearest cent.
percentages/word problem
The Sweet Candy Shop buys 600 pounds of chocolate covered strawberries at $5.59 per pound. If a 10% spoilage rate is anticipated, at what price per pound should the strawberries be sold in order to
achieve a 60% markup based on cost?
percentages/word problem
18) In October, a hardware store purchased snow shovels for $8 each. The original markup was 50% based on the selling price. In December, the store had a sale and marked down the shovels by 20%. By
January 1, the sale was over and the shovels were marked up 15%. In March, the ...
If you try to lift a 2 x 10 plank by applying 25 N force but plank does not move did you do any work?
How many moles of sulfur are present in 2.6 moles of Al2(SO4)3?
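Each formula unit of Al2(SO4)3 contains three sulfate groups, hence three sulfur atoms; a quick check (mine):

```python
moles_al2so43 = 2.6
moles_s = 3 * moles_al2so43  # three SO4 groups per formula unit
print(round(moles_s, 1))  # 7.8
```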
Melanie can paint a 100 square foot wall in 4 hours. James can paint the same area in 3 hours. If they both paint nonstop for 6 hours, about how many square feet can Melanie and James paint together?
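Work-rate problems add the individual rates; a sketch (mine):

```python
melanie_rate = 100 / 4   # square feet per hour
james_rate = 100 / 3
total = 6 * (melanie_rate + james_rate)
print(round(total))  # 350
```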
On the average five customers visit a children department store each hour. Using Poisson distribution, calculate the probability of three customers shop at the store.
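The Poisson probability of exactly k arrivals with mean λ is P(k) = e^(−λ) λ^k / k!; for λ = 5 and k = 3 (a sketch, mine):

```python
import math

lam, k = 5, 3
p = math.exp(-lam) * lam**k / math.factorial(k)
print(round(p, 4))  # 0.1404
```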
Mr. Anderson has a whirling sprinkler for watering his lawn. The sprinkler waters a circle whose length from the sprinkler to the edge of the water is 12 meters. Find the area of the lawn that he can sprinkle at one time. I know it says find the area but my teacher said something a...
lim([-1/(x+2)]+1/2)/x as x->0
Consumer Math
A pyramid composed of four equilateral triangles, called a tetrahedron, has a side length of 5 meters. What is its surface area? Round the answer to the nearest tenth.
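A regular tetrahedron's surface is four equilateral triangles, so A = 4·(√3/4)·s² = √3·s²; a quick check (mine):

```python
import math

s = 5.0
area = math.sqrt(3) * s**2   # 4 * (sqrt(3)/4) * s^2
print(round(area, 1))  # 43.3
```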
Which of the following statements about the relationship between yield to maturity and bond prices is FALSE? A. When the yield to maturity and coupon rate are the same, the bond is called a par value
bond. B. A bond selling at a premium means that the coupon rate is greater th...
Assume that you are 23 years old and that you place $3,000 year-end deposits each year into a stock index fund that earns an average of 9% per year for the next 17 years. 1. How much money will be in
the account at the end of 17 years? 2. How much in account 15 years later at ...
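Part 1 is the future value of an ordinary annuity, FV = P·((1 + r)^n − 1)/r; a sketch (mine):

```python
# Year-end deposits P at annual rate r for n years.
P, r, n = 3000, 0.09, 17
fv = P * ((1 + r) ** n - 1) / r
print(round(fv, 2))
```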
Your trust fund will pay you $100,000 in six years when you turn 25. A shady institute has encouraged you to sign it over in exchange for cash today. Would you prefer an 8% or 10% discount rate to determine the value of the lump sum payment? A. Use 8% the lump sum is $62,741 is greater th...
Which of the following actions will DECREASE the present value of an investment? A. Decrease the interest B. Decrease the future value C. Decrease the amount of time D. All of the above will
increase the present value
Financial Management
Which of the following is NOT an example of an agency cost? A. Paying an accounting firm to audit your financial statements. B. Paying an insurance company to assure that building codes have been met
for new construction. C. Paying a landscaping firm to maintain your firms D. ...
What number can play next after winning-15-46-03-27-51 machine-70-07-76-43-08
You have administrative rights to your computer running Windows 7. You cannot find some of the operating system files on your system volume. What can you do to make them appear? A. Uncheck hide protected operating system files in folder options. B. Uncheck hide protected operating s...
Your company wants to implement a 5-year rotation plan of your computing devices. What percentage of your computer fleet will you have to replace every year in order to achieve this? A. 5 B. 10 C. 20
D. 50 Is the answer C 20
Which of the following statements about interest groups is correct? A. Agricultural business interests have a very short history of influential lobbying activity. B. Labor unions are important financial supporters of the Republican Party. C. Religious groups never try to influence ...
What graph would you use in a favorite food survey situation? A. Bar graph B. Line graph C. Pie chart D. Other chart
A group of retired plant workers seeking to maintain company health benefits might turn to which of the following for help? A. The AFL-CIO B. National Association of Manufacturers C. U.S. Chamber of
Commerce D. American Medical Association Is the Answer B ?
American Politics
okay, its A. FOX
American Politics
Which of the following rapidly rose to become a significant conservative voice among the TV networks? A. FOX B. ABC C. CNN D. CBS Is the Answer C ?
No American Politics
The least likely contributor to Democratic candidates would be: A. National Organization of Women B. United Auto Workers C. National Association of Manufacturers D. Sierra Club The Answer is D
I will say B
The typical Democratic voter A. supports limited government B. supports civil rights C. is a corporate CEO D. favors fundamentalist Christian social positions The answer is D
American government
The 1952 and 1956 Eisenhower elections are examples of A. maintaining B. dealigning C. deviating D. realigning Is d the correct answer
American Government
Television advertisers tend to prefer programs that are A. Intellectually thought-provoking B. conventional and inoffensive C. Controversial and challenging D. Overly concerned with factual
American government
Political action committee contributions typically A. are donated at the request of an elected official. B. go to incumbent candidates of both parties. C. go to the challenger to promote more
competitive elections D. both A and B are correct The answer is d ?
American Government
The Supreme Court relies on support from A. Its enormous prestige. B. Democratic majorities in Congress C. The obedience of state courts D. Republican majorities in Congress Is the answer C
American History
A recent trend in addressing problems such as public health and youth violence has been A. government consolidation and cooperation. B. home rule charters C. commission governments D. town meetings
American History
Which of the following was ruled to be a violation of the separation-of-church-and-state principle of the First Amendment? A. Decorating city streets with stars and trees during the Christmas season.
B. Exemption of religious property from taxation. C. Placement of a Ten comman...
English - concrete nouns
Is there anyway a sentence can be written to portray the word "job" as a concrete noun? Or is "job" always going to be an abstract noun?
Which of the following statements is true regarding social security retirement benefits? A. Not all occupations are covered B. It attempts to replace 42% of your average earnings. C. All of the above
You and your spouse both have good retirement plans through work,and both receive social security checks based on your own incomes. If your goal is to maximize the size of your annuity check every
month, which option should you choose? A. single life annuity B. annuity for lif...
What advantages does a Roth IRA have over the traditional IRA? A. With a Roth you take care of taxes ahead of time and end up with more money to spend at retirement. B. Unlike the traditional IRA, with a Roth you don't pay taxes while your money is in the IRA. C. Early withdrawa...
When planning the retirement payout, there are several options from which to choose. With ______________ option, the annuity provides payments over the life of both you and your spouse. A. lump sum annuity B. single life annuity C. annuity for life D. combination annuity Is the ...
Personal Finance
On December 1, 2004 a $1,000.00 bond, paying 6% interest on January 1st and July 1st of each year is purchased for $950.00. The bond is sold on December 5, 2005 for $980.00. What would be the total
monetary return including both interest and capital gains from holding this bon...
A locomotive that has a mass of 6.4 x 10^5 kg is used to pull 2 railway cars. Railway car 1 has a mass of 5.0 x 10^5 kg and is attached to railway car 2, which has a mass of 3.6 x 10^5 kg, by a locking mechanism. A railway engineer tests the mechanism and estimates that it can only wi...
r over 2 + 6 =5
How did you get the answer of 4 elm leaves, 20 oak leaves, and 8 maple leaves?
mx + nx = p
A 2.86 kg block starts from rest at the top of a 27.5° incline and slides 1.94 m down the incline in 1.43 s. The acceleration of gravity is 9.8 m/s^2. What is the speed of the block after it slides the 1.94 m? Answer in units of m/s.
lim x -> infinity for (sin^2x)/(x^2-1)
Calculus (first year uni)
Hello. lim x->pi^- for cot(x) [In words, this is the limit of cot(x) as x approaches pi from the left. I am very confused as to how this occurs and turns out to be negative infinity. Thanks.]
What are 2 purposes of the exposition of The Most Dangerous Game?
biology need help
Yeah it's ecosystem.
134 m/s a=vt a=26.83s 5.00m/s^2
Algebra 2
V = (df - di)/(tf - ti); solve for ti. Literal equations! I would really appreciate it if you posted the step-by-step procedure for this problem so I can understand it! Many thanks!
Algebra 2
V = (df - di)/(tf - ti); solve for tf. Please show the step-by-step procedure so I can understand how to solve this. Many thanks.
Block A (10 kg) is connected to block B (20 kg) by a massless string on a horizontal surface (coefficient of kinetic friction 0.1). Block B is connected by another string to a hanging mass M through a massless pulley. What is the minimum value of M that will make the system move?
Assume that earth and mars are perfect spheres with the radii of 6,371km and 3,390 km, respectively. Calculate the surface area of earth. b)calculate surface area of mars
Draw the product of the hydration of 2-butene.
early childhood
Routines, transitions, and group times have one thing in common, which is: A. change. B. a sense of certainty. C. emotional boredom. D. focused behavior.
Physics-Change in Angular Momentum
The planet has mass M and is spinning in a counterclockwise direction. I know the asteroid will increase the momentum of the planet, so the total change in angular momentum is equal to positive.
m*v*R*sin(40) ?
Physics-Change in Angular Momentum
What is the change in angular momentum of a planet with radius R if an asteroid mass m and speed v hits the equator at an angle from the west, 40 degrees from the radial direction?
Life orintation
Explain 5 factors that contribute to the disease HIV AIDS
Aspirin (acetylsalicylic acid) is formed when salicylic acid reacts with acetic acid in a strong acid medium. This reaction converts the hydroxyl group on the salicylic acid to an ester while the benzoic acid group on the salicylic acid survives the reaction. Therefore, the answ...
Compare the standard deviations for the heights of females (2.533) and males (4.21).
Algebra - REINY
Hi - Do you think this is correct? BC = AD Area of Rectangle = (FB+AF).BC Area of Rectangle = 72 units squared Area = 1/2 bh Area 1 = 1/2 FB.BC Area 2 = 1/2 AF.AD 1/2 = AF.BC A1 + A2 = 36 units
squared 1/2 FB.BC + 1/2 AF.BC 1/2 [FB.BC + AF.BC] 1/2 [BC (FB + AF) 1/2.72 units sq...
BA is shorter side of rectangle
BF is between BA
Rectangle ABCD has an area of 72 units squared. The length of BF is 4 units. Algebracially prove that the sum of the areas of triangles CBF and ADF are equal to 36 units squared.
2.06 X 10^25 formula units of aluminum sulfate would have what mass?
1040 grams of chlorine gas at STP would have what volume?
If you are given 718 grams of lead (iv) oxalate, how many atoms of oxygen do you have?
2 1/4 hours plus 1 1/2 hours equals ?
When does DNA synthesis occur???
A case study on coastal management..... 300 words or more please ;[
SymMath Application
Introduction to Franck-Condon Factors©
Theresa Julia Zielinski
Department of Chemistry, Medical Technology, and Physics
Monmouth University
West Long Branch, NJ 07764-1898
United States
mail to: tzielins@monmouth.edu
George M. Shalhoub
Department of Chemistry
La Salle University
Philadelphia, PA 19141-1199
United States
mail to: shalhoub@lasalle.edu
Background Document
The goal of this document is to provide students with an introduction to Franck-Condon Factors and the relationship of these factors to vibronic spectroscopy. The document contains a very brief
introduction to Franck-Condon factors through a sequence of guided inquiry type exercises. Specifically, students are asked to use potential energy diagrams for a diatomic molecule to examine a
transition in the diatomic molecule from a ground electronic state to an excited electronic state including consideration of the vibrational levels of each state. The overlap of vibrational
wave functions is used to introduce Franck-Condon factors. All of the exercises in this document are done with pencil and paper as preparation for more detailed work to be done in the companion computational document, FranckCondonComputation (The Franck-Condon Factors).
Computation Document
The goal of this document is to introduce some of the mathematical models that are used to describe vibronic spectra (vibration-electronic spectra) of molecules. After completion of a study of
the material presented in this document and the exercises distributed throughout the document, students are expected to be able to describe the relationship between Franck-Condon factors and
vibronic spectra, explain the key factors determining the magnitude of the Franck-Condon factors, compute Franck-Condon factors, and construct a vibronic spectrum for a molecule from a
knowledge of the wavelength of the electronic transition and the number and the relative intensity of the vibronic peaks. Along the way students are introduced to the idea of using a complete
orthonormal set of functions, in this case the harmonic oscillator functions, as a linear combination to represent another function. The final focus of the document is the simulation of spectra
in terms of the Franck-Condon factors. The document contains many student exercises, and it is highly annotated for the novice Mathcad user. Faculty using this document can easily modify it to increase student interaction with the mathematical content.
These documents build on the concepts learned in the "Exploring Orthonormal Functions." and "Introductory Explorations of the Fourier Series." documents that are found at this Web site. In this
document the first seven harmonic oscillator states for the excited state of a molecule are used to write the linear combination expression for the ground state lowest vibration wave function.
The overlap between the ground state function and the excited state is computed and leads directly to the Franck-Condon factors. The sum of the squares of the deviations is evaluated for
different linear combinations and the best linear combination chosen by the student. The last activity is the simulation of a sample spectrum using the computed Franck-Condon factors. Students
can see the effect of the full width at half maximum on the shape of the Gaussian functions that lead to the computed spectrum.
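The overlap computation described above can also be sketched outside of Mathcad or Mathematica. The following Python snippet is an illustrative sketch only (not part of the distributed documents); it assumes dimensionless units and equal ground- and excited-state vibrational frequencies, computes Franck-Condon factors as squared overlaps between a displaced set of harmonic oscillator eigenfunctions and the ground-state function, and compares them with the closed-form Poisson pattern e^(-S) S^n / n!, where S = d²/2 is the Huang-Rhys factor:

```python
import numpy as np
from math import factorial, pi, sqrt, exp

def ho_wavefunction(n, x):
    """Dimensionless harmonic oscillator eigenfunction psi_n(x)."""
    coeffs = [0.0] * n + [1.0]                      # selects H_n
    Hn = np.polynomial.hermite.hermval(x, coeffs)   # physicists' Hermite polynomial
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * Hn * np.exp(-x**2 / 2.0)

def franck_condon_factor(n, d, x):
    """|<n' (excited state, displaced by d) | 0 (ground state)>|^2 by quadrature."""
    dx = x[1] - x[0]
    overlap = np.sum(ho_wavefunction(n, x - d) * ho_wavefunction(0, x)) * dx
    return overlap**2

x = np.linspace(-12.0, 12.0, 6001)   # grid wide enough that the tails vanish
d = 1.5                              # displacement between the two potential minima
S = d**2 / 2.0                       # Huang-Rhys factor
for n in range(5):
    numeric = franck_condon_factor(n, d, x)
    analytic = exp(-S) * S**n / factorial(n)   # Poisson intensity pattern
    print(f"n={n}: numeric={numeric:.6f}  analytic={analytic:.6f}")
```

The resulting factors give the relative intensities of the vibronic peaks; broadening each line with a Gaussian of chosen full width at half maximum, as in the Mathcad document, then yields a simulated spectrum.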
Audiences: Upper-Division Undergraduate
Pedagogies: Computer-Based Learning
Domains: Physical Chemistry
Topics: Mathematics / Symbolic Mathematics, Quantum Chemistry, Spectroscopy, UV-Vis Spectroscopy
File Name Description Software Type Software Version
FranckCondonBackground.mcd Mathcad Computational Document Mathcad
FranckCondonBackground.nb Mathematica Computational Document added August 2003 Mathematica
FranckCondonBackground.pdf Read Only Document for Mathcad
FranckCondonComputation.pdf Read Only Document for Mathcad
FranckCondonBackground[nb].pdf Read Only Document for Mathematica
FranckCondonComputation[nb].pdf Read Only Document for Mathematica
FranckCondonComputation.nb Mathematica Computational Document Mathematica
FranckCondonComputation.mcd Mathcad Computational Document Mathcad
JCE Subscribers only: name and password or institutional IP number access required.
Comments to: Theresa Julia Zielinski at tzielins@monmouth.edu and George M. Shalhoub at shalhoub@lasalle.edu.
©Copyright 1998 Journal of Chemical Education | {"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/symmath/app?app_id=29&guest=true","timestamp":"2014-04-17T06:42:56Z","content_type":null,"content_length":"13818","record_id":"<urn:uuid:50afc232-529b-4e1c-b1b8-09de31add50f>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
PART 3: Equations of Motion
The equations of motion constitute the core of the mathematical model of flight dynamics. These equations relate the forces acting on the aircraft to its position, velocity, acceleration and
orientation in space. Their derivation is more than an intellectual exercise. The correct interpretation of these equations depends to a very great extent on knowledge of how they were obtained, the
reference frames to which they apply, and the underlying assumptions made along the way.
Simplifying Assumptions
In this model, the aircraft is treated as a rigid body with six degrees of freedom. This is, of course, an idealization of actual flight dynamics, but avoids the complexities that a consideration of
elastic forces and the movement of aircraft parts, such as engine rotors, ailerons, and the like, would introduce. Other simplifying assumptions regarding the dynamics of flight have been made to
reduce computational complexity. The assumptions that earth rate is constant and zero and that Coriolis accelerations can be neglected simplify the equations of motion for the aircraft. Control of
the hypothetical aircraft is accomplished by adjusting values of engine thrust, aerodynamic lift, and bank angle. This models an aircraft which uses ailerons, elevators, and engines alone to control
speed, heading and altitude. Aircraft yaw in all flight maneuvers is assumed to be constant and zero. Also ignored are the complexities introduced by considering the effect aircraft shape and motion
have on the external forces which operate on the aircraft. In this model, the aircraft is capable of both translational and rotational motion.
Derivation From First Principles, the Force Equation
The aircraft moves translationally under the influence of gravity and of the aerodynamic forces: lift, drag, and thrust. These forces are assumed to act on the aircraft's center of mass. With respect to the Inertial frame of reference, the total force, F_T, obeys Newton's second law,

F_T = m (d²R/dt²)|_I     (Equation 3-1)

where m is the aircraft mass and R is its position vector.
The relationship between force and acceleration, embodied in Equation 3-1, is not expressed in a form well-suited to our purpose. It needs to be expressed relative to a frame of reference in which
the forces acting on the aircraft and the state variables of interest (i.e., velocity, roll, and pitch) assume a simple form. The North-Oriented, Local-Level frame provides just such a frame.
The time rate of change of a vector, A, observed in the Inertial frame is related to its rate of change observed in a frame rotating with angular velocity Ω (here, the earth-fixed ECEF frame) by the transport theorem,

(dA/dt)|_I = (dA/dt)|_E + Ω × A     (Equation 3-3)
Reapplying Equation 3-3 to the leftmost term of Equation 3-4 expands the inertial second derivative of the position vector. The earth's spin velocity is constant; thus dΩ/dt = 0, and the term involving the angular acceleration of the frame vanishes.
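As an aside, the transport theorem of Equation 3-3 is easy to check numerically. The sketch below (illustrative only; NumPy is assumed) rotates a vector that is fixed in the earth frame and verifies that its inertial-frame derivative equals Ω × A:

```python
import numpy as np

omega = np.array([0.0, 0.0, 7.292115e-5])   # earth-rate vector (rad/s), z = spin axis
A_ecef = np.array([1.0, 2.0, 3.0])          # a vector constant in the rotating frame

def rot_z(t):
    """Rotation of the earth-fixed frame relative to inertial after time t."""
    w = omega[2]
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def A_inertial(t):
    return rot_z(t) @ A_ecef

# Central-difference derivative observed in the inertial frame at t = 0
dt = 1.0e-3
dA_numeric = (A_inertial(dt) - A_inertial(-dt)) / (2.0 * dt)

# Transport theorem: (dA/dt)_I = (dA/dt)_E + omega x A, with (dA/dt)_E = 0 here
dA_theorem = np.cross(omega, A_inertial(0.0))
print(dA_numeric, dA_theorem)
```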
The aircraft's acceleration relative to the Inertial frame may now be expressed in the ECEF frame as

(d²R/dt²)|_I = (d²R/dt²)|_E + 2Ω × (dR/dt)|_E + Ω × (Ω × R)     (Equation 3-7)
Substituting the right-hand side of Equation 3-7 for the inertial-acceleration term in Equation 3-1 yields

F_T = m [ d²R/dt² + 2Ω × (dR/dt) + Ω × (Ω × R) ]     (Equation 3-8)
In this equation the subscripts indicating the frame relative to which the derivatives are taken have been dropped. But it is important to remember that this equation is valid only in the ECEF frame
of reference. We are still short of our objective, which is to relate velocity and force in the NOLL frame of reference. This is accomplished by finding an expression for each term of Equation 3-8
valid in the NOLL frame.
In the VO frame of reference, the aircraft velocity lies along a single coordinate axis (Equation 3-9), which is equivalent to Equation 3-10. Using Equations 2-5 and 3-10, an expression for the aircraft velocity in the NOLL frame (Equation 3-11) is obtained.
The position vector, relative to the NOLL frame, which locates the aircraft is given by Equation 3-12.
The resolution of forces (i.e., weight, lift, drag, and thrust) onto coordinate axes is most easily accomplished in the VO frame of reference, where the aircraft's velocity vector is parallel to one of the coordinate axes. The vector sum of all the forces acting on the aircraft is given by Equation 3-14. The components of force relative to the NOLL frame are then found by applying the inverse of the transformation appearing in Equation 2-9 to Equation 3-14 (Equation 3-15); one resulting term is the component of total force normal to the aircraft's velocity vector, and the other is the component parallel to it.
Again applying the relationship expressed in Equation 3-3, together with the equation for the aircraft's position vector expressed in the NOLL frame, the equation relating aircraft velocities as observed in the ECEF and the NOLL frames (Equation 3-21) is obtained. Equating components of the unit vectors, the kinematic relations of Equations 3-22, 3-23, and 3-24 are obtained. Applying Equations 2-7 and 3-3 once again, this time to Equation 3-21, gives an expression for the acceleration which, after evaluation using the kinematic relations of Equations 3-22, 3-23, and 3-24, yields Equation 3-26.
All terms of Equation 3-26 are expressed as components along NOLL frame axes. Substitution of Equations 3-11, 3-13, 3-18, and 3-26 into Equation 3-8 yields a system of simultaneous differential equations. Solving these equations for the state derivatives, Equations 3-27 through 3-29 are obtained. These are the equations of motion for the aircraft. Considerable simplification of these equations is possible if the effect of the earth's rotation on flight dynamics is neglected. The effects of earth spin are reflected in those terms of Equations 3-27 through 3-29 which include the earth rate, Ω. Ignoring terms involving Ω yields a substantially simpler set of equations of motion.
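Under the simplifications described earlier (flat, non-rotating earth; controls limited to thrust, lift, and bank angle), the resulting point-mass equations can be integrated in a few lines of code. The sketch below is illustrative only — it uses the standard point-mass form rather than the article's exact Equations 3-27 through 3-29 — and demonstrates that a coordinated level turn holds speed and flight-path angle constant while turning at the rate g·tan(φ)/V:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def step(state, thrust, lift, bank, drag, mass, dt):
    """One Euler step of the point-mass equations over a flat, non-rotating earth.
    state = (V, gamma, psi, x, y, h): speed, flight-path angle, heading, position."""
    V, gamma, psi, x, y, h = state
    V_dot     = (thrust - drag) / mass - G * math.sin(gamma)
    gamma_dot = (lift * math.cos(bank) - mass * G * math.cos(gamma)) / (mass * V)
    psi_dot   = lift * math.sin(bank) / (mass * V * math.cos(gamma))
    return (V + V_dot * dt,
            gamma + gamma_dot * dt,
            psi + psi_dot * dt,
            x + V * math.cos(gamma) * math.cos(psi) * dt,
            y + V * math.cos(gamma) * math.sin(psi) * dt,
            h + V * math.sin(gamma) * dt)

# Coordinated level turn: lift = W / cos(bank), thrust balances drag
mass, V0, bank = 10_000.0, 100.0, math.radians(30.0)
lift = mass * G / math.cos(bank)
state = (V0, 0.0, 0.0, 0.0, 0.0, 1000.0)
dt, steps = 0.01, 1000            # ten seconds of flight
for _ in range(steps):
    state = step(state, thrust=5.0e4, lift=lift, bank=bank,
                 drag=5.0e4, mass=mass, dt=dt)
```

The numbers here (mass, speed, thrust) are arbitrary placeholders chosen only to make the balance conditions hold; any consistent SI values would do.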
Lithonia Trigonometry Tutor
Find a Lithonia Trigonometry Tutor
...Finished my undergrad as the Assistant for Teaching Assistant Development for the entire campus. After undergrad I scored in the 99th percentile on the GMAT and successfully taught GMAT courses
for the next 3 years. Helped individuals with the math sections of the GRE, SAT, and ACT.
28 Subjects: including trigonometry, calculus, physics, linear algebra
I am a Georgia-certified educator with 12+ years of experience teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for the EOCT, CRCT, SAT, and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including trigonometry, statistics, algebra 1, algebra 2
...My hours of availability are Monday - Sunday from 8am to 9pm. My Bachelor's degree is in Applied Math and I took one course in Differential Equations and received an A. I also took several other
courses that included Differential Equations in the solution process. When I graduated from college I ...
20 Subjects: including trigonometry, calculus, GRE, GMAT
...I am very analytically inclined and skilled at problem solving, time management, coaching and mentoring others. I am dedicated to helping students attain the skills and study habits needed to excel in their formal education and beyond that will be invaluable to them throughout life as they beco...
18 Subjects: including trigonometry, geometry, accounting, ASVAB
...During my student teaching, I was in a fourth grade Reading and Language Arts classroom. During this experience, I taught a Reading Intervention class as well, where I spent individual time
with students to help them build their confidence in reading. I saw drastic improvements in each one of my students.
18 Subjects: including trigonometry, English, reading, ESL/ESOL | {"url":"http://www.purplemath.com/Lithonia_trigonometry_tutors.php","timestamp":"2014-04-20T08:52:00Z","content_type":null,"content_length":"24177","record_id":"<urn:uuid:2a93296e-fd31-443f-b7c6-241e3aca668c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
An easy question with coulomb force
February 21st 2008, 09:25 AM #1
Dec 2007
Three identical point charges of q are placed at the vertices of an equilateral triangle, x cm apart. Calculate the force on each charge.
Is the answer F = (Qq*sqrt(3))/x^2?
Yes and no ...... It is true that $F = \sqrt{3} F_{qq}$ where $F_{qq}$ is the magnitude of the force between two identical point charges of q that are x cm apart.
But ....... you've neglected to include the constant of proportionality. So the answer as it stands is not correct. Note also that you're obviously working in units where distance is measured in
cm. Personally, I'd work in either MKS or Gaussian units ......
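For what it's worth, the symmetry argument behind the √3 factor is easy to confirm by summing the two pairwise Coulomb forces as vectors. This is only an illustrative sketch in SI units, with the proportionality constant K included as the reply insists:

```python
import math

K = 8.9875517923e9  # Coulomb constant, N·m²/C²

def net_force_on_vertex(q, side):
    """Net force magnitude on a charge q at one vertex of an equilateral
    triangle of the given side, from two identical charges at the other
    vertices.  The charge of interest sits at the origin."""
    others = [(side, 0.0),
              (side * math.cos(math.pi / 3), side * math.sin(math.pi / 3))]
    fx = fy = 0.0
    for (x, y) in others:
        r = math.hypot(x, y)
        f = K * q * q / r**2      # magnitude of each pairwise repulsion
        fx += f * (-x / r)        # repulsion pushes the target away from the source
        fy += f * (-y / r)
    return math.hypot(fx, fy)

q, side = 1e-6, 0.10  # e.g. 1 µC charges, 10 cm apart (arbitrary example values)
F = net_force_on_vertex(q, side)
```

The two pairwise forces have equal magnitudes and are 60° apart, so the resultant is √3 times either one — i.e. F = √3·K·q²/x² once the constant is restored.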
hai madam i want to study maths pleas teach me maths your faithfull student melingtan
User warned to keep on topic.
Percentage saving in weight of Hollow Shaft vs Solid Shaft
This is very similar to a problem I had in Sept 07. In this case, I needed the smallest hollow shaft required to transmit angular information down a shaft, from one end to the other, under torsional
load, with a limit on angular displacement error. (And it scales as D^-4, yikes!) Your problem is a bit simpler.
I have a polar moment of inertia formula for a hollow round shaft given as
[tex]J = \left( \pi / 32 \right) \left( D^4 - d^4 \right) [/tex]
D, and d are the outside and inside diameters, respectively.
This differs from yours. Why? | {"url":"http://www.physicsforums.com/showthread.php?t=286006","timestamp":"2014-04-20T21:31:51Z","content_type":null,"content_length":"64158","record_id":"<urn:uuid:02fe761a-cb6e-41ca-9911-f573e8b7127f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00242-ip-10-147-4-33.ec2.internal.warc.gz"} |
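Not a direct answer to the formula question, but the weight comparison the thread title asks about can be sketched quickly. The snippet below is an illustrative sketch under one common design criterion — equal maximum torsional shear stress (equal section modulus in torsion) for shafts of the same material and length — using the hollow-shaft J formula quoted above:

```python
import math

def weight_saving(k):
    """Fractional weight saved by a hollow shaft with bore ratio k = d/D,
    designed for the same maximum torsional shear stress as a solid shaft
    of the same material and length.

    Torsional section modulus: Z = J / (D/2) = (pi/16) * D**3 * (1 - k**4).
    Equal strength  =>  D_hollow = D_solid / (1 - k**4)**(1/3).
    Weight ratio = cross-section area ratio = (1 - k**2) / (1 - k**4)**(2/3).
    """
    if not 0 <= k < 1:
        raise ValueError("bore ratio must satisfy 0 <= d/D < 1")
    weight_ratio = (1 - k**2) / (1 - k**4)**(2.0 / 3.0)
    return 1 - weight_ratio

for k in (0.0, 0.5, 0.6, 0.8):
    print(f"d/D = {k:.1f}: weight saving = {100 * weight_saving(k):.1f}%")
```

A different criterion (equal J rather than equal stress) would give a different ratio, so state which one the problem intends before quoting a percentage.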
When is a sequentially closed cone, closed?
I also posed the following question here, but still got no answer. Let $X$ be a locally convex, Hausdorff topological vector space and $C\subseteq X$ a convex cone which is sequentially closed. What are criteria that would imply that $C$ is closed (in the topology of $X$)? Are there also "testable" criteria?
fa.functional-analysis gn.general-topology convex-analysis
A side note: currently you seem to have three unregistered accounts. I suggest registering one of them and then merging the others into it. – Yemon Choi Feb 9 '13 at 20:26
3 Answers
Since this is too long for a comment, I post it as an answer:
Sorry, but I do not agree with Peter Michor's answer. There are certainly better examples, but this the first I can remember: There are countable inductive limits $X=\lim\limits_{\to} X_n$
of Frechet spaces which are not Hausdorff (take a decreasing sequence of open connected sets $U_n$ in the complex plane with empty intersection and $X_n=H(U_n)$ the Frechet space of
holomorphic functions on $U_n$ together with the injective restriction maps). $X$ is then a quotient of the direct sum $\bigoplus X_n$, which is certainly bornological, and the kernel of the
quotient map is sequentially closed (because convergent sequences in the direct sum are located and convergent in some finite sum) but it is not closed because the quotient is not Hausdorff.
The situation is better for metrizable spaces (of course, this is trivial) as well as for so-called Silva spaces (also called LS or DFS-spaces, countable inductive limits of Banach spaces with compact inclusions): in these cases, sequentially closed subspaces are closed.
By 8.5.28 in the book of Bonet and Perez-Carreras, Barrelled Locally Convex Spaces, even sequentially closed subsets of Silva spaces are closed.
Edit. A simpler example (but possibly less relevant for analytical applications) is the space $X=\mathbb R^I$ endowed with the product topology (point-wise convergence of functions $f:I \to
\mathbb R$) if $I$ is uncountable and of moderate cardinality (e.g. $I=\mathbb R$). Then $X$ is bornological (due to the cardinality restriction) and $L=\lbrace f\in X: \lbrace i\in I: f(i)\
neq 0\rbrace \text{countable}\rbrace$ is sequentially closed and dense in $X$.
So what does barrelledness imply about the seminorms generating the topology of the space? – andy teich Feb 14 '13 at 12:46
I do not understand this question. Barreledness is rather close to bornologicity, for instance, every (locally) complete bornological space is ultrabornological and hence barrelled. This
means that barrelledness will not help you very much to conclude closed from sequentially closed. – Jochen Wengenroth Feb 14 '13 at 13:02
...sorry, I meant bornologicity! – andy teich Feb 14 '13 at 14:35
If you can write down the seminorms "explicitly", what conditions do you have to impose in order that the space is bornological? Maybe my question is not well defined... – andy teich Feb 14 '13 at 14:57
As for almost all locally convex properties, bornologicity does not reflect properties of single seminorms. The essential point is always the relation between them or how many of them you
need. A trivial example: If only countably many seminorms describe the locally convex topology the space is (semi-) metrizable and hence bornological. – Jochen Wengenroth Feb 15 '13 at
In ordered vector spaces the question is restated as follows: When does the Archimedean property imply that the positive cone of an ordered vector space is closed? So that should help in your search.
Here is a simple example of a condition involving only the cone.
Suppose that $X$ is a topological vector space with convex cone (wedge) $X_+$. Suppose that there is $e\in X_+$ that is an order unit. That is, for each $x\in X$ there is $\lambda >0$ satisfying $$ x\in \lambda e -X_+ $$ In this case $X_+$ is sequentially closed if and only if it is closed. This is because $e$ must be an internal point of $X_+$. Suppose that $X_+$ is sequentially closed; if $x\in \overline{X_+}$, then $\alpha e + (1-\alpha) x$ is an internal point of $X_+$ for all $\alpha\in (0,1)$ because $X_+$ is convex. In particular $$ \frac{1}{n+1} e + \left(1- \frac{1}{n+1}\right) x $$ is an internal point of $X_+$ for each $n$; this sequence converges to $x$, so by sequential closedness $x$ is in $X_+$.
If $X$ is bornological (carries the finest locally convex topology compatible with the given family of bounded sets, or the given dual space), then sequentially closed implies closed.

Edit: As Jochen pointed out, this is wrong. Sorry.
Lieber Herr Michor, do you agree with the counterexample to your claim that I posted as an answer? – Jochen Wengenroth Feb 13 '13 at 15:29
Fraction Tests, Diagnostic Tests for Fractions
Mixed Operations and Compare the Fractions (Answers on 2nd page of PDF.)
These tests can be used as a summative test or as a diagnostic to determine the level of understanding students have. There are a variety of concepts within the test, and students should have had exposure to all operations (multiplying, dividing, adding, and subtracting fractions) as well as an understanding of common denominators.
Fractions are one of the most difficult concepts for students to understand. Always try to make fractions concrete before moving to the paper/pencil version. | {"url":"http://math.about.com/od/fractionsrounding1/ss/FractionTest.htm","timestamp":"2014-04-21T02:03:32Z","content_type":null,"content_length":"44505","record_id":"<urn:uuid:e23402c4-b8e0-4cbc-bd8c-d8b1015f7438>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
File Formats
Text File Formats
We briefly describe the ASCII file formats for matrices redistributed by the Matrix Market :
Note that most of the data files we distribute are compressed using gzip, and some are multifile archives based on Unix tar. Refer to our compression document if you need help in decoding these
Matrix Market Exchange Formats
This is the native exchange format for the Matrix Market. We provide only a brief overview of this format on this page; a complete description is provided in the paper The Matrix Market Formats:
Initial Design [Gziped PostScript, 51 Kbytes] [PostScript, 189 Kbytes].
The Matrix Market (MM) exchange formats provide a simple mechanism to facilitate the exchange of matrix data. In particular, the objective has been to define a minimal base ASCII file format which
can be very easily explained and parsed, but can easily be adapted to applications with a more rigid structure, or extended to related data objects. The MM exchange format for matrices is really a
collection of affiliated formats which share design elements. In our initial specification, two matrix formats are defined.
• Coordinate Format
A file format suitable for representing general sparse matrices. Only nonzero entries are provided, and the coordinates of each nonzero entry are given explicitly. This is illustrated in the
example below.
• Array Format
A file format suitable for representing general dense matrices. All entries are provided in a pre-defined (column-oriented) order.
Several instances of each of these basic formats are defined. These are obtained by specifying an arithmetic field for the matrix entries (i.e., real, complex, integer, pattern) and a symmetry
structure which may reduce the size of the data file (i.e. general, symmetric, skew-symmetric, Hermitian) by storing nonzero entries only on or below the main diagonal.
MM coordinate format is suitable for representing sparse matrices. Only nonzero entries need be encoded, and the coordinates of each are given explicitly. This is illustrated in the following example
of a real 5x5 general sparse matrix.
1      0      0      6      0
0     10.5    0      0      0
0      0      0.015  0      0
0    250.5    0   -280     33.32
0      0      0      0     12

In MM coordinate format this could be represented as follows.
%%MatrixMarket matrix coordinate real general
% This ASCII file represents a sparse MxN matrix with L
% nonzeros in the following Matrix Market format:
% +----------------------------------------------+
% |%%MatrixMarket matrix coordinate real general | <--- header line
% |% | <--+
% |% comments | |-- 0 or more comment lines
% |% | <--+
% | M N L | <--- rows, columns, entries
% | I1 J1 A(I1, J1) | <--+
% | I2 J2 A(I2, J2) | |
% | I3 J3 A(I3, J3) | |-- L lines
% | . . . | |
% | IL JL A(IL, JL) | <--+
% +----------------------------------------------+
% Indices are 1-based, i.e. A(1,1) is the first element.
1 1 1.000e+00
2 2 1.050e+01
3 3 1.500e-02
1 4 6.000e+00
4 2 2.505e+02
4 4 -2.800e+02
4 5 3.332e+01
5 5 1.200e+01
The first line contains the type code. In this example, it indicates that the object being represented is a matrix in coordinate format and that the numeric data following is real and represented in
general form. (By general we mean that the matrix format is not taking advantage of any symmetry properties.)
Variants of the coordinate format are defined for matrices with complex and integer entries, as well as for those in which only the position of the nonzero entries is prescribed (pattern matrices).
(These would be indicated by changing real to complex, integer, or pattern, respectively, on the header line). Additional variants are defined for cases in which symmetries can be used to
significantly reduce the size of the data: symmetric, skew-symmetric and Hermitian. In these cases, only entries in the lower triangular portion need be supplied. In the skew-symmetric case the
diagonal entries are zero, and hence they too are omitted. (These would be indicated by changing general to symmetric, skew-symmetric, or hermitian, respectively, on the header line).
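As a concrete illustration of how little machinery the coordinate format requires, here is a minimal reader for the real general variant shown above. This is a sketch only; production code should use an established library (for example, SciPy's scipy.io.mmread handles all the variants):

```python
def read_mm_coordinate(lines):
    """Minimal reader for the 'matrix coordinate real general' variant.
    Returns (rows, cols, entries), where entries maps 1-based (i, j)
    coordinates to values."""
    it = iter(lines)
    header = next(it)
    if not header.startswith("%%MatrixMarket matrix coordinate real general"):
        raise ValueError("unsupported Matrix Market type: " + header.strip())
    line = next(it)
    while line.startswith("%"):        # skip comment lines
        line = next(it)
    rows, cols, nnz = (int(tok) for tok in line.split())
    entries = {}
    for _ in range(nnz):
        i, j, value = next(it).split()
        entries[(int(i), int(j))] = float(value)
    return rows, cols, entries

sample = """%%MatrixMarket matrix coordinate real general
% the 5x5 example from the text
5 5 8
1 1 1.000e+00
2 2 1.050e+01
3 3 1.500e-02
1 4 6.000e+00
4 2 2.505e+02
4 4 -2.800e+02
4 5 3.332e+01
5 5 1.200e+01
""".splitlines()

rows, cols, entries = read_mm_coordinate(sample)
```

Extending the sketch to the symmetric variants amounts to mirroring each off-diagonal (i, j) entry to (j, i) after reading.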
The following software packages are available to aid in reading and writing matrices in Matrix Market format.
Harwell-Boeing Exchange Format
The Harwell-Boeing format is the most popular mechanism for text-file exchange of sparse matrix data. The following information, taken from User's Guide for the Harwell-Boeing Sparse Matrix
Collection provides a specification for this format.
Matrix data is held in an 80-column, fixed-length format for portability. Each matrix begins with a multiple line header block, which is followed by two, three, or four data blocks. The header block
contains summary information on the storage formats and space requirements. From the header block alone, the user can determine how much space will be required to store the matrix. Information on the
size of the representation in lines is given for ease in skipping past unwanted data.
If there are no right-hand-side vectors, the matrix has a four-line header block followed by two or three data blocks containing, in order, the column (or element) start pointers, the row (or
variable) indices, and the numerical values. If right-hand sides are present, there is a fifth line in the header block and a fourth data block containing the right-hand side(s). The blocks
containing the numerical values and right-hand side(s) are optional. The right-hand side(s) can be present only when the numerical values are present. If right-hand sides are present, then vectors
for starting guesses and the solution can also be present; if so, they appear as separate full arrays in the right-hand side block following the right-hand side vector(s).
The first line contains the 72-character title and the 8-character identifier by which the matrix is referenced in our documentation. The second line contains the number of lines for each of the
following data blocks as well as the total number of lines, excluding the header block. The third line contains a three character string denoting the matrix type as well as the number of rows,
columns (or elements), entries, and, in the case of unassembled matrices, the total number of entries in elemental matrices. The fourth line contains the variable Fortran formats for the following
data blocks. The fifth line is present only if there are right-hand sides. It contains a one character string denoting the storage format for the right-hand sides as well as the number of right-hand
sides, and the number of row index entries (for the assembled case). The exact format is given by the following, where the names of the Fortran variables in the subsequent programs are given in parentheses:
Line 1 (A72,A8)
Col. 1 - 72 Title (TITLE)
Col. 73 - 80 Key (KEY)
Line 2 (5I14)
Col. 1 - 14 Total number of lines excluding header (TOTCRD)
Col. 15 - 28 Number of lines for pointers (PTRCRD)
Col. 29 - 42 Number of lines for row (or variable) indices (INDCRD)
Col. 43 - 56 Number of lines for numerical values (VALCRD)
Col. 57 - 70 Number of lines for right-hand sides (RHSCRD)
(including starting guesses and solution vectors if present)
(zero indicates no right-hand side data is present)
Line 3 (A3, 11X, 4I14)
Col. 1 - 3 Matrix type (see below) (MXTYPE)
Col. 15 - 28 Number of rows (or variables) (NROW)
Col. 29 - 42 Number of columns (or elements) (NCOL)
Col. 43 - 56 Number of row (or variable) indices (NNZERO)
(equal to number of entries for assembled matrices)
Col. 57 - 70 Number of elemental matrix entries (NELTVL)
(zero in the case of assembled matrices)
Line 4 (2A16, 2A20)
Col. 1 - 16 Format for pointers (PTRFMT)
Col. 17 - 32 Format for row (or variable) indices (INDFMT)
Col. 33 - 52 Format for numerical values of coefficient matrix (VALFMT)
Col. 53 - 72 Format for numerical values of right-hand sides (RHSFMT)
Line 5 (A3, 11X, 2I14) Only present if there are right-hand sides present
Col. 1 Right-hand side type:
F for full storage or
M for same format as matrix
Col. 2 G if a starting vector(s) (Guess) is supplied. (RHSTYP)
Col. 3 X if an exact solution vector(s) is supplied.
Col. 15 - 28 Number of right-hand sides (NRHS)
Col. 29 - 42 Number of row indices (NRHSIX)
(ignored in case of unassembled matrices)
Note: For matrices in elemental form, the leading two dimensions in the header give the number of variables in the finite element application and the number of elements. It is common that not all of
the variables in the application appear in the linear algebra subproblem; hence the matrix represented can be of lower order than the first parameter, described as the "number of variables (NROW)".
The finite element variables are numbered from 1 to NROW, but only the subset of variables that actually appear in the list of variables for the elements define the rows and columns of the matrix.
The actual order of the square matrix cannot be determined until all of the indices are read.
The three character type field on line 3 describes the matrix type. The following table lists the permitted values for each of the three characters. As an example of the type field, RSA denotes that
the matrix is real, symmetric, and assembled.
First Character:
R Real matrix
C Complex matrix
P Pattern only (no numerical values supplied)
Second Character:
S Symmetric
U Unsymmetric
H Hermitian
Z Skew symmetric
R Rectangular
Third Character:
A Assembled
E Elemental matrices (unassembled)
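The fixed-column layout can be consumed by slicing each header line at the stated columns. The sketch below is illustrative only: the header contents are hypothetical and no error checking is done.

```python
# Slice the four mandatory Harwell-Boeing header lines at the fixed columns
# described above. A real reader should validate every field.
def parse_hb_header(lines):
    h = {}
    h["title"], h["key"] = lines[0][0:72].strip(), lines[0][72:80].strip()
    (h["totcrd"], h["ptrcrd"], h["indcrd"],
     h["valcrd"], h["rhscrd"]) = (int(lines[1][14*k:14*(k+1)]) for k in range(5))
    h["mxtype"] = lines[2][0:3]
    (h["nrow"], h["ncol"],
     h["nnzero"], h["neltvl"]) = (int(lines[2][14*k:14*(k+1)]) for k in range(1, 5))
    h["ptrfmt"], h["indfmt"] = lines[3][0:16].strip(), lines[3][16:32].strip()
    h["valfmt"], h["rhsfmt"] = lines[3][32:52].strip(), lines[3][52:72].strip()
    return h

# Hypothetical header for a 4x4 real unsymmetric assembled (RUA) matrix.
hdr = [
    "Sample sparse matrix".ljust(72) + "SAMPLE01",
    "%14d%14d%14d%14d%14d" % (5, 1, 1, 3, 0),
    "RUA".ljust(14) + "%14d%14d%14d%14d" % (4, 4, 8, 0),
    "(16I5)".ljust(16) + "(16I5)".ljust(16) + "(5E16.8)".ljust(20) + " " * 20,
]
h = parse_hb_header(hdr)
print(h["key"], h["mxtype"], h["nrow"], h["ncol"], h["nnzero"])  # SAMPLE01 RUA 4 4 8
```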
Example Fortran Code for Reading Harwell-Boeing Files
To formalize the logical block structure of the data, we have included two pieces of sample FORTRAN code for reading a matrix in the format of the sparse matrix test collection. Both codes assume the
data comes from input unit LUNIT. Neither is a complete code. Real code should include error checking to ensure that the target arrays into which the data are read are large enough. The design allows
the arrays to be read by a separate subroutine that can avoid the use of possibly inefficient implicit DO-loops.
The code above outlines the structure of the data. The interpretation of the row (or variable) index arrays will require knowledge of the matrix and right-hand side types, as read in this code.
Matlab Procedures for Reading/Writing Harwell-Boeing Files
The developers of the NEP matrix collection have provided a Matlab m-file to write a Matlab sparse matrix in Harwell-Boeing format. A version for complex matrices is also available.
The Berkeley Benchmarking and Optimization (BeBOP) Group has developed a library and standalone utility for converting between Harwell-Boeing, Matrix Market, and MATLAB sparse matrix formats.
Coordinate Text File Format
Note: This format is being phased out.
The coordinate text format provides a simple and portable method to exchange sparse matrices. Any language or computer system that understands ASCII text can read this file format with a simple read
loop. This makes this data accessible not only to users in the Fortran community, but also developers using C, C++, Pascal, or Basic environments.
In coordinate text file format the first line lists three integers: the number of rows m, columns n, and nonzeros nz in the matrix. The nonzero matrix elements are then listed, one per line, by
specifying row index i, column index j, and the value a(i,j), in that order. For example,
m n nz
i1 j1 val1
i2 j2 val2
i3 j3 val3
. . .
. . .
. . .
inz jnz valnz
White space is not significant, (i.e. a fixed column is not used). The nonzero values may be in either in fixed or floating point representation, to any precision (although Fortran and C typically
parse less than 20 significant digits). For example, the following are each acceptable: 3, 3.141, +3.1415626536E000, 3.1e0.
Experiments show that these coordinate files are approximately 30% larger than corresponding Harwell-Boeing files. Versions compressed with Unix compress or gzip typically exhibit similar ratios.
To represent only structure information of a sparse matrix, a single zero can be placed in the value position, e.g.
M N nz
i1 j1 0
i2 j2 0
i3 j3 0
. . .
. . .
. . .
inz jnz 0
Although more efficient schemes are available, this allows the same routine to read both types of files. The addition of a single byte to each line of the file is typically of little consequence.
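Writing the format is just as simple; the sketch below emits the size line followed by one entry per line from a dict keyed by (row, column). The 6-digit precision is an arbitrary choice, since the format accepts any precision.

```python
# Emit the coordinate text format: a size line "m n nz" followed by one
# "i j value" line per nonzero. Entry order is not significant.
def write_coordinate_text(m, n, entries):
    lines = ["%d %d %d" % (m, n, len(entries))]
    for (i, j), val in sorted(entries.items()):
        lines.append("%d %d %.6g" % (i, j, val))
    return "\n".join(lines)

out = write_coordinate_text(3, 3, {(1, 1): 1.0, (2, 2): 10.5, (3, 1): -0.015})
print(out)
```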
Note that there is no implied order for the matrix elements. This allows one to write simple print routines which traverse the sparse matrix in whatever natural order given by the particular storage
Also note that no annotations are used for storing matrices with special structure. (This keeps the parsing routines simple.) Symmetric matrices can be represented by only their upper or lower
triangular portions, but the file format reveals just that --- the reading program sees only a triangular matrix. (The application is responsible for reinterpreting this.)
A MATLAB function (M-file) is available which reads a matrix in coordinate text file format and creates a sparse matrix.
The Matrix Market is a service of the Mathematical and Computational Sciences Division / Information Technology Laboratory / National Institute of Standards and Technology
Last change in this page: 14 August 2013.
Items by Di Francesco, Marco
Number of items: 16.
Di Francesco, M. and Fagioli, S., 2013. Measure solutions for non-local interaction PDEs with two species. Nonlinearity, 26 (10), pp. 2777-2808.
Di Francesco, M. and Matthes, D., 2013. Curves of steepest descent are entropy solutions for a class of degenerate convection-diffusion equations. Calculus of Variations and Partial Differential
Equations, n/a, n/a.
Burger, M., Di Francesco, M. and Franek, M., 2013. Stationary states of quadratic diffusion equations with long-range attraction. Communications in Mathematical Sciences, 11 (3), pp. 709-738.
Carrillo, J.A., Di Francesco, M., Figalli, A., Laurent, T. and Slepčev, D., 2012. Confinement in nonlocal interaction equations. Nonlinear Analysis: Theory Methods & Applications, 75 (2), pp.
Amadori, D. and Di Francesco, M., 2012. The one-dimensional Hughes model for pedestrian flow : Riemann-type solutions. Acta Mathematica Scientia, 32 (1), pp. 367-379.
Di Francesco, M. and Twarogowska, M., 2011. Asymptotic stability of constant steady states for a 2×2 reaction-diffusion system arising in cancer modelling. Mathematical and Computer Modelling, 53
(7-8), pp. 1457-1468.
Carrillo, J.A., Di Francesco, M., Figalli, A., Laurent, T. and Slepčev, D., 2011. Global-in-time weak measure solutions and finite-time aggregation for nonlocal interaction equations. Duke
Mathematical Journal, 156 (2), pp. 229-271.
Di Francesco, M., Markowich, P.A., Pietschmann, J.-F. and Wolfram, M.-T., 2011. On the Hughes' model for pedestrian flow: the one-dimensional case. Journal of Differential Equations, 250 (3), pp.
Di Francesco, M., Lorz, A. and Markowich, P., 2010. Chemotaxis-fluid coupled model for swimming bacteria with nonlinear diffusion : Global existence and asymptotic behavior. Discrete and Continuous
Dynamical Systems, 28 (4), pp. 1437-1453.
Burger, M., Di Francesco, M., Pietschmann, J.-F. and Schlake, B., 2010. Nonlinear cross-diffusion with size exclusion. SIAM Journal on Mathematical Analysis (SIMA), 42 (6), pp. 2842-2871.
Di Francesco, M. and Donatelli, D., 2010. Singular convergence of nonlinear hyperbolic chemotaxis systems to keller-segel type models. Discrete and Continuous Dynamical Systems - Series B, 13 (1),
pp. 79-100.
Di Francesco, M., Markowich, P.A. and Fellner, K., 2008. The entropy dissipation method for spatially inhomogeneous reaction-diffusion-type systems. Proceedings of the Royal Society of London Series
A - Mathematical Physical and Engineering Sciences, 464 (2100), pp. 3273-3300.
Burger, M. and Di Francesco, M., 2008. Large time behavior of nonlocal aggregation models with nonlinear diffusion. Networks and Heterogeneous Media, 3 (4), pp. 749-785.
Di Francesco, M. and Rosado, J., 2008. Fully parabolic Keller-Segel model for chemotaxis with prevention of overcrowding. Nonlinearity, 21 (11), pp. 2715-2730.
Di Francesco, M. and Wunsch, M., 2008. Large time behavior in Wasserstein spaces and relative entropy for bipolar drift-diffusion-Poisson models. Monatshefte fur Mathematik, 154 (1), pp. 39-50.
Di Francesco, M., Fellner, K. and Liu, H., 2008. A nonlocal conservation law with nonlinear "radiation" inhomogeneity. Journal of Hyperbolic Differential Equations, 5 (1), pp. 1-23.
Re: st: How do you drop the variable -(e)- from the data?
From wgould@stata.com (William Gould, Stata)
To statalist@hsphsun2.harvard.edu
Subject Re: st: How do you drop the variable -(e)- from the data?
Date Wed, 05 Nov 2003 09:37:00 -0600
Roger Newson <roger.newson@kcl.ac.uk> noticed that if, after estimation,
he types -list *-, in addition to all the expected variables, a variable
named "(e)" also appears in the output. He writes,
> I am having a problem with the variable whose name is (e), which appears to
> be generated whenever an estimation command is executed, and which contains
> the results of the function -e(sample)-.
It is a bug that Roger ever saw the variable "(e)", so let me explain:
1. Roger is right: Variable "(e)" has to do with e(sample) and, in
fact, is e(sample).
2. The existance of variable "(e)" was supposed to be completely hidden.
Had we done that right, I would not now be writing this email.
3. There is no bug except that Roger saw the variable "(e)" (and
found some other ways to access it).
So we will fix that bug but, until we do, it is not a bug that should bother you.
For those who are curious, here is what "(e)" is about:
T1. When you run an estimation command, Stata needs to store e(sample) --
the function that identifies which observations were used. That
information is stored in the dataset in the secret variable named
T2. The name "(e)" (note the parens) was chosen carefully to be an
invalid name. It should not surprise you that inside Stata, we have
the ability to create variables named anything we want. We chose an
invalid name so that it would never conflict with a valid name a user
might want to create. In addition, an invalid name would be rejected
by the parser and so make it more difficult that any user would ever
discover the secret variable.
T3. When you -save- a dataset, variable "(e)" is *NOT* stored in the
dataset. Stata knows to skip that variable. More correctly, variable
"(e)" is not stored unless you specify -save-'s -all- option. As it
says in the on-line help, "-all- is for use by programmers. If
specified, e(sample) will be saved with the dataset. You could run a
regression, -save mydata, all-, -use mydata-, and -predict yhat if
T4. The variable "(e)" is dropped (1) whenever a new estimation command
is run (in which case a new "(e)" is created), and (2) whenever
you type -discard- (which eliminates previous estimation results),
and (3) whenever a -drop- command results in a dataset that contains
only "(e)".
So what happened? Where did we go wrong? In fact, "(e)" has been in Stata
for some time without anyone knowing, but when we added fancier pattern
matching for varlists (so that you can type things like "*e*", something that
used not to be allowed), we forgot to exclude "(e)", and that opened to the
door to Roger's discovery.
It was just as Nick Cox <n.j.cox@durham.ac.uk> suspected: "This raises the
question of whether it's been there for ages, or it's only recently become
visible as a result of some other change in Stata."
-- Bill
Detecting the point of collision between a rotating point and a rotating line.
October 3rd 2008, 08:05 AM #1
I'm trying to formulate an algorithm that will calculate the point where an independently rotating point collides with an independently rotating line. This will only concern instances where the
rotating point, which will be called P1 from now on, and the rotating line, which will be called L1 from now on, are rotating in a two-dimensional plane where the axes of rotation for both P1 and
L1 are parallel. L1 will also be expressed as two rotating points, P2 and P3, around the same point of rotation.
Both the systems associated with P1 and L1 will have the following known values:
w - The angular velocity of each system, expressed in degrees/frame.
r - The distance to each system's point of rotation, expressed in pixels.
(cx,cy) - The coordinates of the point of each system that is being rotated around
(P1.x,P1.y)... - The coordinates of P1,P2, and P3 when the systems are at their final state.
O° - The angle to the rotating point, P1, from C1, which is the point of rotation in the system that deals with P1. It is also the angle to the point on the rotating line, P2, from C2, which is
the point of rotation in the system that deals with L1.
w2/w1 - The ratio of the angular velocities of each system. It will be assumed that w1 > 0. w2 is the angular velocity of the system that deals with L1, and w1 is the angular velocity of the
system that deals with P1.
The point where P1 collides with L1 is where the angle to P1 from P2 is the same as the angle to P3 from P2. Here's an example to further explain what I mean.
* cw = clockwise and will be expressed in negative degrees
* ccw = counter-clockwise and will be expressed in positive degrees
There are three pictures of two objects rotating. The red circle on the left shows the system that deals with P1, and the green circle on the right shows the system that deals with L1. L1 is the
line segment that connects P2 and P3 together. The initial state shows the starting position of each point before they are rotated. The intermediate state shows the position of P1 and L1 after
they each have half of their angular velocities added to the angle to each point from the point of rotation of each system (C1 and C2). This is also where P1 intersects L1, as the angle to P1
from P2 is the same as the angle to P3 from P2, which is 0°. The final state is what is shows after the total angular velocity is added to each system. As you can see, P1's orientation is 180°
different from its initial state, and P2's and P3's orientations are 90° different from their initial states. Also, the coordinate system in this example mimics the coordinate system found in
most computer programs, where the top-left corner is the origin and the value of the y-axis increases as you move downwards.
The only state the program will be able to detect is the final state, and since all of the factors that dictate where the point of collision occurs are known, it should be possible to find a
function to give that point. A way to find that point is to simply know how much, in degrees, that P1 needs to "back up" in order to be in the orientation where it collides with L1. This value
will now be known as F°.
In the above example F° = 90°,as that is the amount needed to "back up" P1 to be in the same position as it is in the intermediate state where the collision happened. To find the corresponding
angle for L1, just multiply the ratio of the angular velocities of each system by F°. In the above example, the ratio is -1/2 (w2/w1 = 90/-180 = -1/2), so the corresponding angle would be -45°
(w2*F°/w1 = 90*90/-180 = -45°).
The problem is finding a way to find F°. The means to find F° must include each systems' angular velocities(w1 and w2), position of the points of rotation(C1 and C2), final state orientation (O1°
and O2°), radius from P1 and P2 to the points of rotation (r1 and r2). Another value, the offset added to the angle to P2 from C2 to find the angle to P3 from P2, which will be known as K°, must
also be known. In the final state in the above example, K° = 180° as you need to add 180° to the angle to P2 from C2 to get the angle to P3 from P2 (O2° = 225°, angle to P3 from P2 = 45°, 225° -
45° = 180°. To check if this is right, O2 + K = angle to P3 from P2, 225° + 180 = 405° - (360°) = 45°, so this is right).
The position that P1 will be when the collision occurs will be the ordered pair (x1, y1), and the position that P2 will be when the collision occurs will be the ordered pair (x2, y2). The angle
between these two points must be the same as the angle to P3 from P2 when the collision occurs, which can be expressed as (O2 + w2*F°/w1) + K°. (O2 + w2*F°/w1) will find what O2 was when the
collision occurred, and adding the offset, K°, to that value will get the angle to P3 from P2 when the collision occurred.
To find the angle between two points, you must subtract the coordinates of the point that will be used as the origin, P2, from the point you want to find the angle to, P1, and then use these
values in the inverse of the tangent function. These two values will be known as simply x and y.
To clarify...
arctan(-y/x) = (O2 + w2*F°/w1) + K
y = y1 - y2
x = x1 - x2
(y is multiplied by -1 to correct the fact that we are working in a computer-based coordinate system)
The arctan function is not accurate at all times, and some rules needed to be combined with the result in order to get the correct angle between the two coordinates. In the above example,
however, these rules do not need to be applied.
(x1,y1) and (x2,y2) can be expressed as follows:
x1 = cos(O1 + F°) * r1 + C1x
y1 = -sin(O1 + F°) * r1 + C1y
x2 = cos(O2 + w2*F°/w1) * r2 + C2x
y2 = -sin(O2 + w2*F°/w1) * r2 + C2y
(sin is multiplied by -1 to correct the fact that we are working in a computer-based coordinate system)
(C1x means the x-coordinate of C1, C1y means the y-coordinate of C1, etc..)
Using the above example, we will use this method to show that P1 and L1 collide when we substitute the variables needed with the values found in the final stage and with the already-known
information that F° = 90°.
Using the information given in the final stage...
O1 = 270°, O2 = 225°
w1 = -180°, w2 = 90°
r1 = 50, r2 = 50
C1 = (50,50), C2 = (137.5,50)
angle to P3 from P2 = 45°
K° = O2 - angle to P3 from P2 = 180°
F° = 90 (Already known given the example. In other problems this value is not known but needs to be found. The method to find this is unknown, and it is the reason I'm posting all of this in the
first place.)
Using this information to see if it works...
x1 = cos(O1 + F°) * r1 + C1x = cos(270° + 90) * 50 + 50 = 100
y1 = -sin(O1 + F°) * r1 + C1y = -sin(270° + 90) * 50 + 50 = 50
x2 = cos(O2 + w2*F°/w1) * r2 + C2x = cos(225° + 90*90°/-180) * 50 + 137.5 = 87.5
y2 = -sin(O2 + w2*F°/w1) * r2 + C2y = -sin(225° + 90*90°/-180) * 50 + 50 = 50
x = x1 - x2 = 100 - 87.5 = 12.5
y = y1 - y2 = 50 - 50 = 0
O2 + w2*F°/w1 + K = 225° + 90*90°/-180 + 180° = 360° = 0°
arctan(-y/x) = arctan(0) = 0°
arctan(-y/x) = O2 + w2*F°/w1 + K = 0°
Since the above statement is true, P1 collides with L1 when P1 is "backed up" by 90° and when P2 is "backed up" by -45° (w2*F°/w1 = 90*90°/-180 = -45°).
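As a quick numerical cross-check of this worked example (a sketch; the variable names follow the post, and a wrapped angle difference is used to avoid the 0°/360° ambiguity):

```python
import math

# Plug the final-state values from the example back in with F = 90 and
# check that the angle from P2 to P1 equals (O2 + w2*F/w1) + K.
O1, O2 = 270.0, 225.0
w1, w2 = -180.0, 90.0
r1, r2 = 50.0, 50.0
C1x, C1y, C2x, C2y = 50.0, 50.0, 137.5, 50.0
K, F = 180.0, 90.0

x1 = math.cos(math.radians(O1 + F)) * r1 + C1x
y1 = -math.sin(math.radians(O1 + F)) * r1 + C1y
x2 = math.cos(math.radians(O2 + w2 * F / w1)) * r2 + C2x
y2 = -math.sin(math.radians(O2 + w2 * F / w1)) * r2 + C2y

seen = math.degrees(math.atan2(-(y1 - y2), x1 - x2))
want = O2 + w2 * F / w1 + K
diff = (seen - want + 180.0) % 360.0 - 180.0  # wrapped angle difference

print(round(x1, 1), round(y1, 1), round(x2, 1), round(y2, 1))  # 100.0 50.0 87.5 50.0
print(abs(diff) < 1e-9)  # True
```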
The problem is that I don't know how to turn the equation (arctan(-y/x) = O2 + w2*F°/w1 + K = 0°) into a function that will return F°. F° is found on both sides of the equation, and when you
factor in the rules that need to be applied to arctan to find the real angle when given any two points, it seems pretty impossible.
I realize that no one will read this whole thing completely understanding what I'm saying, and I apologize sincerely for this. I've tried very hard to express this problem in the clearest way
that I could, but I don't think I'll get an answer anytime soon. If you actually know what I'm talking about, please let me know how you would approach this problem. If you know some genius that
can solve any mathematical problem, please let me know this person so that I can contact him/her. If you know of a forum that exists somewhere else on the internet that handles problems like
this, it would be very nice to show me the link to such a place. Thank you in advance.
numerical solution
It looks to me as if you won't find an analytical solution to your resulting equation. However, a numerical solution should be straightforward to implement. You could use a Newton-Raphson scheme
or a simple bisection method (easy to find online).
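Following that suggestion, here is a root-finding sketch in Python, using the constants from the worked example. It is an illustration, not the only way to set this up; the guard |d| < 90° keeps the coarse scan away from the ±180° wrap discontinuity.

```python
import math

def wrap(deg):
    """Map an angle difference into [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

# Final-state constants from the worked example in the post.
O1, O2 = 270.0, 225.0        # final-state angles of P1 and P2
w1, w2 = -180.0, 90.0        # angular velocities, deg/frame
r1, r2 = 50.0, 50.0
C1, C2 = (50.0, 50.0), (137.5, 50.0)
K = 180.0                    # offset from P2's angle to P3's angle from P2

def d(F):
    """Wrapped difference between the measured P2->P1 angle and the
    required P2->P3 angle after backing the systems up by F degrees."""
    a1 = math.radians(O1 + F)
    a2 = math.radians(O2 + w2 * F / w1)
    x1, y1 = math.cos(a1) * r1 + C1[0], -math.sin(a1) * r1 + C1[1]
    x2, y2 = math.cos(a2) * r2 + C2[0], -math.sin(a2) * r2 + C2[1]
    seen = math.degrees(math.atan2(-(y1 - y2), x1 - x2))
    return wrap(seen - (O2 + w2 * F / w1 + K))

# Coarse 1-degree scan for a sign change away from the wrap, then bisect.
F = None
for i in range(180):
    a, b = float(i), float(i + 1)
    if d(a) * d(b) <= 0 and abs(d(a)) < 90 and abs(d(b)) < 90:
        for _ in range(60):
            m = (a + b) / 2.0
            if d(a) * d(m) <= 0:
                b = m
            else:
                a = m
        F = (a + b) / 2.0
        break

print(round(F, 3))  # 90.0 for this example
```

A Newton-Raphson scheme would converge faster once a bracket is found, but bisection is hard to break.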
Problem related to permutation and combination.
March 13th 2009, 11:13 PM #1
Can anyone help me with this?
In how many ways can we arrange three digits, say 1, 2, 3, in a row of six so that no two same digits are together?
Any help is appreciated.
Construct a diagram with your row of six as underscores
___ X ___ X ___ X ___ X ___ X ___
The first underscore can have any of the three integers
3 X ____ X ____ X ____ X ____ X ____
The second underscore can have either of the two integers different from the first
3 X 2 X ____ X ____ X ____ X ____
Each remaining underscore can be filled in 2 ways, since the possibilities are the two integers that are not the one to the left of it.
so 3 X 2 X 2 X 2 X 2 X 2 = 3 * 2^5 = 96
You have three choices for the first digit. For the next digit, there are only two choices, because the second digit must be different than the first. Similarly for the third, and the fourth, and
so on. Thus, the total number of arrangements is 3 x 2^5 = 96.
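Both arguments can be confirmed by exhaustive enumeration; a quick sketch:

```python
from itertools import product

# Count length-6 sequences over {1, 2, 3} with no two equal adjacent digits.
count = sum(
    1
    for s in product((1, 2, 3), repeat=6)
    if all(a != b for a, b in zip(s, s[1:]))
)
print(count)  # 96
```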
Thanks for the quick reply.
Actually I am trying to solve the question below.
Three variant of GMAT paper are to be given to 12 students(so that each variant is used for 4 students) and In how many ways can the student be placed in the two rows of six each so that there
should be no identical variant side by side and the student sitting one behind the other should have the same variant?
kindly help me out in this?
There is a small modification to the question mentioned in the first post,
sorry I missed that earlier.
In how many ways can we arrange 1,2,3,1,2,3 in a row of six so that no two same digits are together?
I think we can do it in the manner mentioned below.
We can select the first digit in 3 ways (from 1, 2, 3). Let's say it is 1:
1 _ _ _ _ _
Now we have to select the next digit from 2, 3 as per the condition.
This can be done in 2 ways. Let's say it is 2:
1 2 _ _ _ _
We have to select the third digit from either 1 or 3; again this can be done in 2 ways.
Let's say the third digit is 1.
Now our number becomes:
1 2 1 _ _ _
For the 4th digit we have a choice of either 2 or 3, but we cannot select 2: if we select 2, then two consecutive 3s are placed in the last two positions, which is contrary to what is required.
So the 4th digit is selected in 1 way:
1 2 1 3 _ _
Similarly for the fifth and the last positions.
Hence the total number of ways of selecting the numbers is
3 *2*2*1*1*1 = 12 ways
Please let me know if the way I am thinking is right or not.
suggestions are welcome.
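For what it's worth, an exhaustive check of this modified question gives 30 distinct arrangements rather than 12; the walkthrough above follows only the branch that starts 1, 2, 1. The figure of 30 agrees with the explicit listing later in the thread. A quick sketch:

```python
from itertools import permutations

# Distinct arrangements of 1,1,2,2,3,3 with no two identical digits adjacent.
valid = {
    p for p in permutations("112233")
    if all(a != b for a, b in zip(p, p[1:]))
}
print(len(valid))  # 30
```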
Three variant of GMAT paper are to be given to 12 students(so that each variant is used for 4 students) and In how many ways can the student be placed in the two rows of six each so that there
should be no identical variant side by side and the student sitting one behind the other should have the same variant?
If you look at the part above that I highlighted, you will realize that there is no solution as stated.
It may be that “should” is actually “could”.
For this new modification in the question, that is correct.
Hello everyone,
I am stuck in the below question so need assistance.
Three variant of GMAT paper are to be given to 12 students(so that each variant is used for 4 students) and In how many ways can the student be placed in the two rows of six each so that there
should be no identical variant side by side and the student sitting one behind the other should have the same variant?
I like using this calculator for working out permutations and calculations.
Combinations and Permutations Calculator
Combinatorics question
Hello a69356
Hello everyone,
I am stuck in the below question so need assistance.
Three variant of GMAT paper are to be given to 12 students(so that each variant is used for 4 students) and In how many ways can the student be placed in the two rows of six each so that there
should be no identical variant side by side and the student sitting one behind the other should have the same variant?
Call the three variants A, B and C. Work out first the number of ways in which these three variants can be placed in the twelve seats, and then, for each selection, arrange each group of 4
students within their 4 seats.
We note that row 2 will be identical to row 1 as far as the position of the variant papers is concerned, since equal variants must be placed one behind the other.
So we may choose 2 seats from the 6 for the variant A papers in $\binom{6}{2}$ = 15 ways. Of these, $\binom{5}{1}$ = 5 will have two seats side-by-side. So the places for variant A may be chosen
in 10 ways. In 5 of these, there will be 3 or 4 adjacent seats left vacant - see below, where these are marked with (*):
A . A . . . (*)
A . . A . .
A . . . A . (*)
A . . . . A (*)
. A . A . .
. A . . A .
. A . . . A (*)
. . A . A .
. . A . . A
. . . A . A (*)
In each of the five marked (*), there are 2 ways of placing papers B and C so that no two of the same variant are adjacent. In the remaining 5, there are 4 ways of placing B & C. So the total
number of ways of placing all three sets of papers is 5 x 2 + 5 x 4 = 30.
Having chosen which seats are to be occupied by each variant, there are then 4! ways of arranging the 4 variant A students in their seats; 4! for the B's and 4! for the C's.
So the total number of ways of seating the students altogether is 30 x 4! x 4! x 4! = 414720.
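Grandad's figures can be confirmed by brute force: 30 valid row patterns of AABBCC, each combined with 4! orderings of the students holding each variant. A sketch:

```python
from itertools import permutations
from math import factorial

# Row patterns: arrangements of AABBCC with no two equal letters adjacent.
patterns = {
    p for p in permutations("AABBCC")
    if all(a != b for a, b in zip(p, p[1:]))
}
total = len(patterns) * factorial(4) ** 3
print(len(patterns), total)  # 30 414720
```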
Grandad, I admit that the wording of this problem mystifies me.
I do not follow your solution. Does it allow the arrangement?
Clearly valid if “should” becomes “could”.
Combinatorics question
Hello Plato
Clearly, there are two posts that have been merged here. The OP amplified his first question (in which he talked about arranging 1, 1, 2, 2, 3, 3 in a line with no two identical digits adjacent
to one another) into the question about GMAT, in which he introduced the wording that you queried:
So, to answer your question above: No, on two counts:
□ Because you cannot have two identical variants adjacent to one another in the same row. Even if you replace 'should' by 'could' this still rules out AAAABB.
□ Because I interpret the word 'should' to mean 'must'. 'This student should sit here' means, to me, 'This student is obliged to sit here'. So in row 2, a student with a variant A paper should
(i.e. is obliged to) sit behind another student with an A paper. This interpretation is, I think, confirmed by the OP's initial problem (which I have only just seen) in which he refers to
just six items in one row.
(The idea of these seating requirements is presumably to reduce the possibility of students cheating.)
Since there are only 30 arrangements, it is relatively simple to list them. I do so here, using the method outlined in my first posting, where I place the A's first and then arrange the B's and
C's in the remaining places. I have adopted the strategy of placing an A in the first available space on the left, and then working from left to right to find all possible positions of the second
A. Then, within each of these 'A' selections, I have placed the B's again working from left to right to find all possible positions.
The ordered pairs above each group indicate the positions occupied by the A's, and * indicates A's in a pattern that leaves only two possibilities for B & C, the others yielding four possible
positions for B & C (as per my first posting).
(1, 3)
ABACBC *
ACABCB *
(1, 4)
(1, 5)
ABCBAC *
ACBCAB *
(1, 6)
ABCBCA *
ACBCBA *
(2, 4)
(2, 5)
(2, 6)
BACBCA *
CABCBA *
(3, 5)
(3, 6)
(4, 6)
BCBACA *
CBCABA *
Well I think that I now see the confusion.
In the schools of the US, I think it is safe to say that:
would be considered six rows of two each whereas
A B
A B
A C
A C
B C
B C
would be considered two rows of six each.
I assumed he was dealing with the test given for graduate business school admission in this country.
What is the GMAT?
Here is what I know as the GMAT: Prep smarter, score higher—guaranteed, or your money back!.
I would like to see the actual question.
Converting Non-Terminating Repeating Decimals into Fractions
Step 1. Set the given number equal to n.
Step 2. Multiply one or both equations by the power of ten that will move the decimal point the number of places needed to line the numbers up in a "matching" way.
Step 3. Subtract the two equations that you have now.
Step 4. Isolate n by division.
Directions: Please convert the following repeating decimal into a fraction.
Step 1.
Step 2.
We will multiply n by 100 to line up the numbers appropriately.
Step 3.
Step 4.
Directions: Please convert the following repeating decimals into fractions.
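The four steps above can be automated. The sketch below is an illustrative addition that uses Python's Fraction type; the example decimals (0.727272... and 0.1666...) are hypothetical inputs, not the ones from the original exercise:

```python
from fractions import Fraction

def repeating_to_fraction(non_rep: str, rep: str) -> Fraction:
    """Convert 0.<non_rep><rep><rep>... to a fraction using Steps 1-4:
    set the number equal to n, multiply by powers of ten so the repeating
    tails line up, subtract, then isolate n by division."""
    k, m = len(non_rep), len(rep)
    # 10**(k+m) * n  minus  10**k * n  leaves an integer numerator.
    numerator = int((non_rep + rep) or "0") - int(non_rep or "0")
    denominator = 10 ** (k + m) - 10 ** k
    return Fraction(numerator, denominator)

print(repeating_to_fraction("", "72"))  # 0.727272... -> 8/11
print(repeating_to_fraction("1", "6"))  # 0.1666...   -> 1/6
```

Fraction reduces the result to lowest terms automatically, which is the division in Step 4.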
Summary: Journal of Statistical Physics, Vol. 90, Nos. 3/4, 1998
First-Order Phase Transitions in One-Dimensional Steady States
Peter F. Arndt, Thomas Heinzel, and Vladimir Rittenberg
Received July 31, 1997; final November 6, 1997
The steady states of the two-species (positive and negative particles) asymmetric
exclusion model of Evans, Foster, Godreche, and Mukamel are studied using
Monte Carlo simulations. We show that mean-field theory does not give the
correct phase diagram. On the first-order phase transition line which separates
the CP-symmetric phase from the broken phase, the density profiles can be
understood through an unexpected pattern of shocks. In the broken phase the
free energy functional is not a convex function, but looks like a standard
Ginzburg-Landau picture. If a symmetry-breaking term is introduced in the
boundaries, the Ginzburg-Landau picture remains and one obtains spinodal
points. The spectrum of the Hamiltonian associated with the master equation
was studied using numerical diagonalization. There are massless excitations on
the first-order phase transition line with a dynamical critical exponent z = 2, as
expected from the existence of shocks, and at the spinodal points, where we find
z = 1. It is the first time that this value, which characterizes conformal-invariant equilibrium problems, appears in stochastic processes.
st: note option in the twoway graph going nuts
From: "Alexand Shepotilo" <shepotil@econ.bsos.umd.edu>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: note option in the twoway graph going nuts
Date: Fri, 05 Nov 2004 13:52:21 -0500
Dear statalist users,
I want to place a long note at the bottom of the graph. As said in the
help file, I wrote
note ("First" "Second" "Third"
Stata says "unmatched quote"
If I try to fit the note in two lines
note ("First" "Second")
I get the following :
Finally, if I specify
note ("First" "Second" , orientation (horizontally))
Stata says ") required"
I have wasted an hour trying to fix it but could not. If I do not include
note everything works perfectly.
Here is the actual graph:
twoway (kdensity wtototal if wtototal<=15&wtototal>=0) (kdensity
wtototal if a==1&wtototal<=15&wtototal>=0, bfcolor (yellow)) (kdensity
wtototal if a==10&wtototal<=15&wtototal>=0, bfcolor (red)),
ytitle(Distribution of welfare gains, margin(small)) xtitle(Welfare
gains as a percent of consumption from WTO accession, margin(small))
xlabel(-1(3)15, labsize(small)) xmlabel(-1(1)15, ticks nolabels)
xscale(range(0 15)) title(Figure 3. Distributions of the estimated
welfare gains from Russian WTO accession, size(medsmall)) subtitle("for
the entire sample, the poorest decile, and the richest decile.", size
(medsmall)) legend(order(1 "all households" 2 "the poorest decile" 3
"the richest decile") size(small)) note ( "Observations in a range from
0% to 15% are shown. Deciles are constructed" "to be representative of
ten percent of Russian population based on the weights of the Household
Budget Survey", size(vsmall) )
Thank you,
[Haskell-cafe] special term describing f :: (Monad m) => (a -> m b) ?
Robert Dockins robdockins at fastmail.fm
Tue Dec 14 11:27:57 EST 2004
Ralf Hinze wrote:
>>What is a function of the following type called:
>>f :: (Monad m) => (a -> m b)
>>Is there a special term describing such a function (a function into a monad)?
> It is often called a procedure or an effectful function (I assume that
> `a' and `b' are not meant to be universally quantified). I sometimes
> use a special arrow -|> for this type.
> Cheers, Ralf
I can understand how calling this kind of function "effectual" makes
sense in the "magic" IO monad, or perhaps even in the ST and State
monads, because the term seems to imply side-effects. However, it is a
misnomer for, e.g., the Error, List and Cont monads. Most (all but IO)
monads are actually just ways of simplifying the expression of truly
pure functions. It seems to me that terms like "procedure" and
"effectual function" blur the distinction between the special IO monad
and all the others. I think this is an important point, because many
people new to Haskell are confused about what monads are and how they
work; it is generally some time before they realize that IO is special
and begin to understand how the other monads fit in. I think we would
do a disservice to perpetuate terminology which encourages this
Unfortunately, I don't have a better idea. Perhaps someone more clever
than I can invent/find a term which captures the relevant concept.
In statistics, two quantities are said to be correlated if greater values of one tend to be associated with greater values of the other (positively correlated), or if greater values of one tend to be
associated with lesser values of the other (negatively correlated). The correlation (or, more formally, correlation coefficient) between two variables is a number measuring the strength and usually
the direction of this relationship.
In the case of interval or ratio variables, non-zero correlation is often apparent in a scatterplot of the data points: positive correlation is reflected in an overall increasing trend in the points
(when viewed from left to right on the graph), whereas negative correlation appears as an overall decreasing trend.
Measures of correlation
Most measures of correlation take on values from −1 to 1, or from 0 to 1. Zero correlation means that greater values of one variable are associated with neither higher nor lower values of the other,
or possibly with both. A correlation of 1 implies a perfect positive correlation, meaning that an increase in one variable is always associated with an increase in the other (and possibly with an
increase of the same size always, depending on the correlation measure being used). Finally, a correlation of −1 means that an increase in one variable is always associated with a decrease in the
other (possibly always the same size).
Some measures of correlation include the Pearson product-moment correlation coefficient, Spearman's rank correlation coefficient, and Kendall's tau.
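As an illustration, one widely used measure, the Pearson product-moment coefficient, can be computed directly from its definition. The data below are made-up values:

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0, perfect positive correlation
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0, perfect negative correlation
```

The two extreme outputs correspond to the perfect positive and perfect negative cases described above.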
Method and system for calibrating an x-ray scanner by employing a single non-circular standard
Patent 5,095,431
Inventor: Feldman, et al.
Date Issued: March 10, 1992
Application: 07/324,545
Filed: March 16, 1989
Inventors: Cornuejols; Dominique (Palaiseau, FR)
Feldman; Andrei (Paris, FR)
Assignee: General Electric CGR S.A. (Issy les Moulineaux, FR)
Primary Examiner: Smith; Jerry
Assistant Examiner: Huntley; David
Attorney Or Agent: Oblon, Spivak, McClelland, Maier & Neustadt
U.S. Class: 378/207
Field Of Search: 378/18; 378/207; 250/252.1R; 364/413.13
U.S. Patent Documents: 4225789; 4331869; 4352020; 4400827; 4497061; 4663772; 4873707
Foreign Patent Documents: 107253; 154429; 216507; 218367; 239647; 3412303
Other References: Gonzalez et al., Digital Image Processing, Addison-Wesley Pub. Co., 1987, pp. 163-175.
Abstract: The invention relates to x-ray scanners and more particularly to a method for calibrating scanners by means of a single standard of elliptical shape, for example. Attenuation
measurements are performed with respect to a number of principal angular positions of the scanner about the standard. In addition, a number of measurements or views are taken on each
side of this principal position in order to compute a mean attenuation with respect to each channel. The attenuation curve is then smoothed by filtering and polynomial approximation.
Claim: What is claimed is:
1. A method for calibrating an x-ray scanner having an x-radiation source and an N-channel detection device with a single standard having a shape other than circular, comprising the steps of:
a) positioning the standard between the x-radiation source and the detection device;
b) measuring attenuations in the N channels of the detection device with respect to P principal angular positions of a rotating structure and with respect to n elementary positions or
views in close proximity to each other about each principal angular position;
c) computing the mean value of the n attenuations with respect to each channel and with respect to each of the P principal angular positions;
d) smoothing the N mean values of attenuations from one channel to the next with respect to each of the P principal angular positions, thereby obtaining a curve of response of
attenuation as a function of the position of the channel, thereby eliminating high-frequency components, said response curve thus obtained representing the real variation of the
attenuation introduced by the standard as a function of the respective position of the N channels;
wherein there are an even number of P principal angular positions and said positions form pairs that are 180 degrees apart and wherein the mean values of attenuations for the
detection channels of each member of each said pair are determined by adding together attenuation values of both members of each said pair.
2. A method of calibration according to claim 1, wherein the smoothing step includes a filtering step.
3. A method of calibration according to claim 2, wherein the smoothing step further includes a polynomial approximating step performed on the curve which results from filtering.
4. A method of calibration according to claim 2 or claim 3, wherein the filtering step consists in forming the Fourier transform of the N mean values corresponding to one of the P
principal angular positions, in filtering said Fourier transform to form a filtered Fourier transform, thereby eliminating the high-frequency components, and then forming the inverse
Fourier transform of said filtered Fourier transform.
5. A method according to claim 1, wherein the standard has a shape which is symmetrical with respect to two axes and wherein mean values are determined for attenuations corresponding
to symmetrical positions of the standard with respect to said axes.
6. A method according to claim 1, wherein said X-ray scanner includes:
first means for computing the logarithm of the N signals delivered by the N detectors;
second means for computing for each channel the attenuation introduced by the standard;
third means for computing for each channel the mean value of the n measured attenuations with respect to a given principal angular position;
fourth means for computing the Fourier transform of the N mean attenuations corresponding to a principal angular position;
fifth means for eliminating the high-frequency components in the spectrum resulting from the Fourier transform;
sixth means for computing the inverse Fourier transform of the low-frequency components of the spectrum;
seventh means for computing a polynomial approximation of the curve resulting from the inverse Fourier transform;
and means for retaining in memory the values representing the polynomial approximation of each response curve relative to a principal angular position.
7. A method of calibration according to claim 1, wherein mean values of attenuation for each of the N detectors are determined for a selected principal angular position, by adding
together all attenuation values associated with principal angular positions having, for each channel in said N-channel detection device, an equivalent path length that x-rays must
travel through the standard, in order to be detected by said channel.
Description: BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to x-ray scanners and more particularly to a method for calibrating devices of this type which makes use of a single non-circular standard and to a system for
the application of said method.
2. Description of the Prior Art
In order to examine a patient, it is becoming an increasingly common practice to make use of x-ray devices known as scanners which produce images of cross-sections of the patient.
These devices are based on the physical phenomenon of absorption of x-rays by the human body. This absorption is directly related to the distance x traveled by x-rays within the body in accordance with the formula:
I=I.sub.o e.sup.-bx
where:
I.sub.o is the intensity of radiation entering the human body,
I is the intensity of radiation emerging from the human body,
b is a coefficient of attenuation which depends on the body being traversed.
In a logarithmic measurement scale, the attenuation log(I.sub.o /I) is equal to bx or, in other words, is proportional to the distance x.
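The relation can be checked numerically. The coefficient and path length in this sketch are illustrative values, not figures from the patent:

```python
import math

b = 0.2        # attenuation coefficient (illustrative)
x = 5.0        # path length through the body (illustrative)
i0 = 1000.0    # entering intensity
i = i0 * math.exp(-b * x)       # emerging intensity, I = I0 * e^(-bx)

attenuation = math.log(i0 / i)  # logarithmic attenuation
print(attenuation)              # equals b * x = 1.0
```

Taking the logarithm of the intensity ratio recovers b*x exactly, which is why the electronics compute logarithms first.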
As shown in FIG. 1, these devices are essentially constituted by an x-ray source 10 associated with a detection device 11, these two elements being disposed in a fixed geometrical
relationship with respect to each other in such a manner as to ensure that the body to be examined can be interposed between them. In addition, they are supported by a structure (not
shown in the drawings) which is capable of rotating about the body to be examined so as to irradiate the body at different angles. The x-ray source which is controlled by a device 13
emits its rays in an angular sector which is of sufficient width to illuminate the entire cross-section of the body. The detection device 11 has the shape of an annular sector, the
length of which is adapted to the width of the x-ray beam and is constituted by a large number of elementary detectors 12 in juxtaposed relation.
In order to obtain an image of the cross-section of the human body traversed by the x-ray beam, the structure which supports the source 10 and the detection device 11 is displaced in
rotation about the body and the output signals of the elementary detectors 12 are measured for suitable processing in accordance with known methods in order to obtain an image which is representative of the cross-section. For this treatment, the elementary detectors 12 (also known as channels) are connected to an electronic device 14 which first computes the
logarithm of the signals received so as to obtain a signal whose amplitude is proportional to the attenuation of the x-rays.
As a result of different phenomena which will not be explained here, the amplitude of the aforementioned signal in the case of each elementary detector or channel is not proportional
to the attenuation which has in fact been sustained. In consequence, in order to remedy this drawback, consideration has been given to various methods which consist for example in recording the output signals of the channels in the presence of bodies having known dimensions and a known coefficient of absorption in order to compute the attenuations (logarithm calculations) and to compare these measured attenuations with values computed as a function of the dimensions and of the absorption coefficient of the body or standard. These comparisons make it possible to deduce a law of correspondence or a modifying law between the measured values and the values which should be obtained. This law can be in the form of correspondence files or of mathematical formulae representing this correspondence in respect of each detection channel.
By way of example, the standards which are employed for performing these so-called calibration measurements are shims having different thicknesses which are introduced in proximity to
the x-ray source, thus entailing the need for handling operations at the level of the source in order to insert and remove said shims. Furthermore, the shape of these shims and their position are far removed from the shape and position of the body of the patient to be examined, thus increasing the non-linearity of the system.
In U.S. Pat. No. 4,352,020, it is proposed to employ circular shims 15 to 17 having different diameters which are disposed at the center of rotation of the support structure. This
makes it possible to come closer to the conditions of measurements which will be made on the body to be examined. This patent also proposes to make use of a standard in the form of a circular-section cone which is displaced transversely with respect to the beam so as to obtain different lengths of attenuation. With the standards described, the measurements are
performed in respect of a predetermined position of the support structure and in the case of each standard.
FIG. 2 shows the shape of three response curves 20, 21 and 22 of attenuation as a function of the position of the channels in the case of measurements on three standards of circular
shape. The measured values are represented by the dots and vary in the vicinity of a mean value which represents the theoretical value in a linear system. These curves can be employed as follows: when the measured signal corresponds to a point A, it will be deduced therefrom that the linear signal is the point A' of the mean curve 20. When the measured signal corresponds to a point B located between the curves 20 and 21, the linear signal will be deduced therefrom by interpolation between the curves 20 and 21. This interpolation can be computed in accordance with a linear law or more generally a polynomial law.
The curves 23 and 24 of FIG. 3 show in a different form the principle of calibration at the level of a channel. These curves describe within a given channel the attenuation as a
function of the thickness x in the case of measured values (curve 23) and in the case of computed values (straight line 24). In fact, the measured values give points which are joined to each other in accordance with a predetermined law, namely either linear or polynomial, so as to obtain a continuous curve. When measuring an attenuation, this corresponds for
example to point C of curve 23 and there is deduced therefrom the linear value corresponding to point C' of curve 24.
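The interpolation between calibration points can be sketched as a piecewise-linear lookup. The calibration table below is hypothetical and simply stands in for the measured curves:

```python
import bisect

# Hypothetical per-channel calibration pairs: (measured, linearized) attenuation.
TABLE = [(0.0, 0.0), (1.1, 1.0), (2.3, 2.0), (3.6, 3.0)]

def linearize(measured: float) -> float:
    """Map a measured attenuation onto the linear scale by interpolating
    between the two nearest calibration points (linear law)."""
    xs = [m for m, _ in TABLE]
    i = bisect.bisect_right(xs, measured)
    i = min(max(i, 1), len(TABLE) - 1)     # clamp to a valid segment
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (measured - x0) / (x1 - x0)

print(linearize(1.7))  # halfway along the second segment -> 1.5
```

A cubic or biquadratic law, as mentioned in the cited patent, would replace the straight-line formula inside the segment.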
The U.S. patent cited earlier describes a device in which the correspondence between the measured values and the real values of attenuation is effected by a system of files created
during the calibration operation. In regard to interpolation, the patent proposes linear, cubic and biquadratic interpolations, but only the linear interpolation is described in detail.
The methods of calibration which have been briefly described in the foregoing suffer from a major disadvantage in that they call for the use of a number of standards, thus involving a large number of handling operations. Moreover, these handling operations have to be accurate, especially in the case of circular standards, the different centers of which must coincide with the center of rotation of the structure.
It is worthy of note that the U.S. patent proposes to employ a single standard which would have the shape given by FIG. 12 of said patent and to rotate the structure about said
standard, thereby obtaining absorption paths of different lengths according to the angular position of the structure. However, this mode of operation is only mentioned and does not
indicate either the method or the means for applying the method in this case.
SUMMARY OF THE INVENTION
One object of the present invention is to carry out a method of calibration which makes use of a single standard having a shape other than circular and in particular an elliptical
shape or a shape which would be symmetrical with respect to two coplanar axes.
Another object of the present invention is to provide a system for carrying out said method.
The invention relates to a method for calibrating an x-ray scanner comprising an x-radiation source and an N-channel detection device by making use of a single standard having a shape
other than circular, said method being distinguished by the fact that it involves the following operations:
a) positioning of the standard between the x-radiation source and the detection device;
b) measurement of attenuations in the N channels of the detection device in respect of P principal angular positions of the rotating structure and in respect of n elementary positions
in close proximity to each other about each principal angular position;
c) computation of the mean value of the n attenuations in respect of each channel and in respect of each of the P principal angular positions;
d) smoothing of the N mean values of attenuations from one channel to the next in respect of each of P principal angular positions so as to obtain a curve of response of attenuation
as a function of the position of the channel in which the high-frequency components have been eliminated,
said response curve thus obtained being the real curve of variation of the attenuation introduced by the standard as a function of the respective position of the N channels.
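The smoothing operation (d), a forward Fourier transform followed by suppression of the high-frequency components and an inverse transform, can be sketched with a naive discrete Fourier transform. An FFT routine would normally be used; the sample data and cutoff here are illustrative:

```python
import cmath

def lowpass_smooth(samples, keep):
    """Naive DFT low-pass filter: zero every frequency bin above `keep`
    (and its conjugate mirror), then inverse-transform back to samples."""
    n = len(samples)
    spec = [sum(samples[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                for t in range(n)) for f in range(n)]
    spec = [c if (f <= keep or f >= n - keep) else 0.0
            for f, c in enumerate(spec)]
    return [sum(spec[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]

noisy = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1]
print(lowpass_smooth(noisy, keep=1))  # smoothed response curve
```

Keeping a bin and its mirror preserves the conjugate symmetry of a real signal, so the inverse transform stays real.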
The invention also relates to a system for carrying out the method of calibration, said system being distinguished by the fact that it comprises:
first means for computing the logarithm of the N signals delivered by the N channels,
second means for computing in the case of each channel the attenuation introduced by the standard,
third means for computing in the case of each channel the mean value of the n measured attenuations in respect of a given principal angular position,
fourth means for computing the Fourier transform of the N mean attenuations corresponding to a principal angular position,
fifth means for eliminating the high-frequency components in the spectrum resulting from the Fourier transform,
sixth means for computing the inverse Fourier transform of the low-frequency components of the spectrum,
seventh means for computing a polynomial approximation of the curve resulting from the inverse Fourier transform,
and means for retaining in memory the values representing the polynomial approximation of each response curve relative to a principal angular position.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an x-ray scanner in which calibration is obtained by means of circular standards.
FIG. 2 is a diagram showing different curves of attenuation as a function of the position of the detectors or channels and of the diameter of the standard.
FIG. 3 is a diagram showing the curves of theoretical and measured attenuation in respect of a predetermined channel, as a function of the absorption path.
FIG. 4 is a schematic diagram of an x-ray scanner in which calibration is obtained in accordance with the characteristics of the invention.
FIG. 5 is a diagram showing different curves of attenuation as a function of the position of the detector or channel and of the position of the x-ray source and of the detectors with
respect to the single standard.
FIG. 6 is a functional diagram of a system for processing the output signals of the detectors.
DETAILED DESCRIPTION OF THE INVENTION
FIGS. 1, 2 and 3 have already served to define the prior art in the introductory part of this specification and will not be described further.
The diagram of FIG. 4 is similar to that of FIG. 1 in that it shows an x-ray scanner comprising an x-ray source 30 and a detection device 31 having a plurality of detectors or channels 35 between which is interposed a patient's body in normal operation or a standard 32 of elliptical shape at the time of calibration operations. Although not shown in the drawings, a structure is provided for supporting the source 30 and the device 31 and for rotating the assembly about the patient's body or the standard, the axis of rotation being defined by the point 33 located within the standard but not necessarily at its geometrical center. The x-ray source 30 is controlled by a control device 34 whilst the different detectors or channels 35 of the device 31 are connected to an electronic system 36 which will be described with reference to FIG. 6.
The method of calibration in accordance with the invention consists in placing the support structure in different angular positions or so-called principal angular positions about the
standard and in carrying out a series of attenuation measurements in each angular position and on each side of this position. By way of example, it is possible to choose sixteen principal angular positions uniformly spaced at 22.degree.30' and to perform, for example, ten elementary measurements on all the detection channels about each principal angular position. By way of example, if the number of detectors is 1024, the total number of views will be 16.times.10=160 and the total number of measurements will be 1024.times.160=163840. By way of example, each view about a principal angular position is separated by an angle of approximately one-third of a degree.
Accordingly, for each principal angular position, there are ten views on each of the 1024 channels or in other words ten measurements per channel in an angular sector of approximately
three degrees and each measurement corresponds to a slightly different path within the standard and therefore to a slightly different attenuation. In accordance with the invention, in each channel, these ten measurements are employed for computing a mean value which makes it possible to improve the signal-to-noise ratio by a coefficient .sqroot.10. These mean values, which are sixteen in number, serve to trace in respect of each principal angular position a curve of variation of attenuation as a function of the position of the channels. In fact, this number of curves can be reduced to four since the sixteen principal angular positions are reduced to four if the symmetries between the principal angular positions and those
of the standard are taken into account.
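The per-channel averaging can be sketched as follows; the ten readings are made-up numbers standing in for the ten views of a single channel (averaging n independent readings reduces the noise on the mean by a factor of the square root of n):

```python
# Ten views of one channel around a principal angular position (illustrative).
views = [2.42, 2.55, 2.48, 2.51, 2.60, 2.44, 2.53, 2.47, 2.49, 2.52]

# Single mean attenuation retained for this channel at this position.
mean_attenuation = sum(views) / len(views)
print(round(mean_attenuation, 3))
```

The same reduction is applied to every one of the 1024 channels at each of the sixteen principal positions.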
These curves of variation are shown in FIG. 5 in the case of three of the four principal angular positions after the symmetries have been taken into account. In this figure, the dots
40, 41 or 42 represent the mean values of attenuation for a certain number of the 1024 channels and curve 43 results from a smoothing operation in accordance with the present
invention, this operation being described hereinafter with reference to FIG. 6.
Curve 43 corresponds to the angular position of FIG. 4 in which the elliptical standard 32 is irradiated along the major axis of the ellipse. Curve 44 corresponds to the angular
position shown in dashed lines in FIG. 4 in which the standard 32 is irradiated along the minor axis of the ellipse. Finally, curve 45 corresponds to an intermediate position between
the two positions mentioned above.
Moreover, as indicated in the foregoing, the sixteen principal angular positions are symmetrical in pairs, the two symmetrical positions being separated in pairs by an angle of
180.degree.. These two symmetrical positions produce identical attenuation curves and only one need accordingly be retained. In fact, in accordance with the invention, it is proposed to combine the measurements obtained from these two symmetrical positions by determining their mean value in the manner which will be explained in greater detail with reference to FIG.
6. The signal-to-noise ratio is accordingly improved by a factor .sqroot.2, namely a total of .sqroot.20.
Furthermore, if use is made of a standard of symmetrical shape with respect to two axes such as orthogonal axes, for example, it is possible to group the measurements corresponding to
positions which are symmetrical with respect to these axes. In consequence, the sixteen principal positions which have been reduced to eight by the first symmetry may again be grouped
in pairs by determining their mean value, thus finally obtaining four principal angular positions. The signal-to-noise ratio is again improved by √2, namely a total of √40.
The principal aspects of the method of calibration in accordance with the invention are :
utilization of a non-circular standard;
measurement of output signals of the N channels in respect of P principal angular positions of the source-detector structure and in respect of n elementary positions or views which
are close together about each principal angular position ;
computation of the mean value of the n measurements in respect of each channel and in respect of each of the P principal angular positions ;
smoothing of the mean values from one channel to the next in respect of each principal angular position so as to obtain a curve of response in which the high frequency components have
been eliminated.
In the case of uniformly spaced and even-numbered principal angular positions, it is intended to compute the mean value of the measurements corresponding to principal positions spaced
at 180°. Furthermore, if the standard has a symmetrical shape with respect to two axes, computation of the mean value is also carried out on the measurements obtained in respect
of positions which are symmetrical with respect to said axes.
In order to implement the method, it is proposed to construct a system in accordance with the functional diagram given in FIG. 6. In this figure, the signals delivered by the 1024
detectors are applied to a coding circuit 50 which supplies digital values or codes. These codes will be used for all the operations which will be described hereinafter and which can
be carried out by means of a suitably programmed computer.
The N codes resulting from a measurement at an elementary position or view are applied to a logarithm computation circuit 51. The logarithmic values are subtracted in a circuit 52
from a reference logarithmic value REF delivered by a detector designated as a monitor which directly receives the x-radiation without attenuation. This difference therefore gives the
value of attenuation introduced by the standard with respect to a path in the air. These N differential values, which each correspond to one channel, are then subtracted in a circuit
53 from N logarithmic attenuation values measured without any standard or in other words along paths in the air. These values have been measured and computed prior to said method of
calibration and are subsequently employed in known manner at the time of measurements made on the patient. For this reason, they are recorded in a memory 54. This second subtraction
operation makes it possible to take into account part of the disparities between the channels and therefore to suppress their influence from one channel to the next. The N values
resulting from this second subtraction are recorded in a memory 55 which is designed to receive the N.P.n. codes which will result from the n measurements made (or n views taken) at n
elementary positions about a principal angular position. The n.N. codes are used for computing the mean value on each channel by means of a circuit 56. The codes of the N mean values
are recorded in a memory 57. It is understood that the memory 55 and the computing circuit 56 can be constructed in the form of an adding circuit which would carry out nine successive
addition operations.
In order to benefit by the effects of symmetry of the P principal angular positions and of the standard, several modes of operation are open to choice. One mode consists in carrying
out the measurements successively for four symmetrical principal angular positions and in determining the mean value for one channel, not on ten values but on forty values. This calls
for a memory 55 designed to receive 40.N codes. Another mode of operation consists in making use of P memories 57, namely one memory per principal angular position, and in performing a
second computation of the mean value on the values contained in the four memories corresponding to the four symmetrical principal angular positions.
After these operations involving computation of the mean value which have the principal object of reducing noise and therefore improving the signal-to-noise ratio, there then take
place the smoothing operations which are of two orders. First of all, an operation involving elimination of the high frequencies in the circuits 58, 59 and 60 and an operation
involving polynomial approximation in a circuit 61.
The operation which involves elimination of the high frequencies first consists of a Fourier transform in the circuit 58 in order to obtain the spectrum of the signal on the N
channels, then in low-pass filtering in the circuit 59 which removes the high-frequency components and, finally, an inverse Fourier transform so as to revert to the initial signal but
without any high-frequency component.
The polynomial approximation operation in the circuit 61 can be limited to the central portion of the curves 43, 44 and 45 of FIG. 5. This operation is carried out by all known
methods and means. The results of this polynomial approximation are recorded in a memory 62 which can contain the numerical values of the smoothed curves P, P/2 or P/4 depending on
whether the above-mentioned symmetries are taken into account or not, the curves being computed one after the other.
The values recorded in the memory 62 at the end of the operations in respect of P principal angular positions constitute calibration values which are employed in known manner at the
time of operations involving measurement on the patient so as to compute the real value of attenuation from the measured value.
Each smoothed curve of the memory 62 corresponds to a non-smoothed curve of the memory 57 and the difference between the values of these two curves in respect of one and the same
channel makes it possible to compute a difference value. Thus when, in the case of this channel, the difference between the value measured with the patient and the smoothed value is
within this difference value, it is accordingly deduced that the real value is that of the smoothed curve. When this difference exceeds said difference value, it is necessary to
perform an interpolation computation of the linear or polynomial type in order to determine the real value between two smoothed curves.
The system for carrying out the method has been described in the form of functional circuits but it is clearly apparent that this system can be realized in the case of a computer in
which the different operations hereinabove described are performed by programming.
* * * * *
Randomly Featured Patents | {"url":"http://www.patentgenius.com/patent/5095431.html","timestamp":"2014-04-18T19:06:12Z","content_type":null,"content_length":"45390","record_id":"<urn:uuid:778d6701-e037-4a66-badc-e3b009ab39db>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
If n is an integer, write an expression that must be a positive integer.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50aff355e4b09749ccac193c","timestamp":"2014-04-18T03:36:53Z","content_type":null,"content_length":"48827","record_id":"<urn:uuid:2dff9f60-2f48-42cd-9945-8f35b9882fec>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: July 2013 [00123]
Re: An analytical solution to an integral not currently in
• To: mathgroup at smc.vnet.net
• Subject: [mg131376] Re: An analytical solution to an integral not currently in
• From: Daniel <dosadchy at its.jnj.com>
• Date: Tue, 16 Jul 2013 05:57:05 -0400 (EDT)
First I have to say that "Another system's unconfirmed answer" is not a good enough reason for such a topic title.
For the math:
The "another system answer" is correct only for a=0. And Mathematica's Integrate[] gives the same answer up to a constant.
However, for non-zero a, the given analytical expression is not correct, as can be seen by plotting the following:
f[a_, b_, x1_, x2_] := NIntegrate[1/Sqrt[Log[x] + a x + b], {x, x1, x2}]
g[a_, b_, x_] := -Sqrt[\[Pi]] I Exp[-a x - b] Erf[I Sqrt[Log[x] + a x + b]]
Plot[{f[1, 0, 1, x], g[1, 0, x] - g[1, 0, 1]}, {x, 1, 25}]
Plotting for a=0 will show identity:
Plot[{f[0, 1, 1, x], g[0, 1, x] - g[0, 1, 1]}, {x, 1, 25}]
> Question: Integral dx of 1/sqrt(Log[x] + a*x + b)
> (sorry if my notation is off; I just used the online
> integrator and don't have Mathematica proper,
> http://integrals.wolfram.com/index.jsp?expr=1%2Fsqrt%2
> 8Log%5Bx%5D+%2B+a*x+%2B+b%29)
> (the online integrator returned this as of the time
> of writing this (2013-07-13): "Mathematica could not
> find a formula for your integral. Most likely this
> means that no formula exists." )
> Another system's unconfirmed answer (in that
> notation; sorry) (version 5.27.0):
> -sqrt(%pi)*%i*%e^(-a*x-b)*erf(%i*sqrt(log(x)+a*x+b))
> Strangely, the other system only produces this result
> when given, say, x(t) in all places for x (including
> variable of integration).
> I can't seem to get the other system to verify its
> result symbolically, but when I try random numerical
> sampling, it does seem to agree, albeit horribly
> plagued by floating point errors for large x.
> Can anyone offer insight, or possibly prove its
> correctness or incorrectness? :)
> (P.S. I just joined this group, so apologies if it's
> the wrong one or I'm not following guidelines) | {"url":"http://forums.wolfram.com/mathgroup/archive/2013/Jul/msg00123.html","timestamp":"2014-04-16T19:15:40Z","content_type":null,"content_length":"27273","record_id":"<urn:uuid:9aeee89f-edec-4727-ad00-f9c2a4da370f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
General solutions for cos(1/x)
June 10th 2013, 08:19 AM #1
Junior Member
Dec 2012
General solutions for cos(1/x)
This may seem a bit too simple of a question:
Do we write solutions to cos(1/x) = 0 as:
1/x = PI/2 (+/-) n(PI) where n is an integer or
1/x = n(PI) (+/-) PI/2 where n is an integer?
Is one form of the solution better than the other? If so, which is the better form?
Re: General solutions for cos(1/x)
Re: General solutions for cos(1/x)
If n= 4, then pi/2+ 4pi= (9/2)pi and cos((9/2)pi)= 0
1/x = n(PI) (+/-) PI/2 where n is an integer?
If n = 4, then 4(pi) + pi/2 = (9/2)pi and cos((9/2)pi) = 0, so both forms describe the same solution set.
Is one form of the solution better than the other? If so, which is the better form?
June 10th 2013, 08:37 AM #2
June 10th 2013, 10:55 AM #3
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/trigonometry/219716-general-solutions-cos-1-x.html","timestamp":"2014-04-20T11:45:28Z","content_type":null,"content_length":"40872","record_id":"<urn:uuid:ff22f121-99d8-44e0-80c3-fb9c8dc8e673>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
A207817 - OEIS
Number of walks in 4-dimensions using steps (1,0,0,0), (0,1,0,0), (0,0,1,0) and (0,0,0,1) from (0,0,0,0) to (n,n,n,n) such that after each step we have y>=x.
Number of possible necklaces consisting of n white beads, n-1 red beads, n-1 green beads, and n-1 blue beads (two necklaces are considered equivalent if they differ by a cyclic permutation).
Note: the generalizations of this formula and the relation between d-dimensional walks and d-colored necklaces are also true for all d, d>=5.
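The closed form multinomial(4n; n, n, n, n)/(n + 1) can be sanity-checked against a brute-force enumeration of the walks; here is a small Python sketch of mine (not part of the OEIS entry):

```python
from itertools import permutations
from math import factorial

def walks(n):
    """Brute-force count of walks from (0,0,0,0) to (n,n,n,n) with y >= x after every step."""
    steps = 'x' * n + 'y' * n + 'z' * n + 'w' * n
    count = 0
    for p in set(permutations(steps)):
        x = y = 0
        ok = True
        for s in p:
            x += s == 'x'
            y += s == 'y'
            if y < x:
                ok = False
                break
        count += ok
    return count

def a(n):
    """Closed form: multinomial(4n; n, n, n, n) / (n + 1)."""
    return factorial(4 * n) // factorial(n) ** 4 // (n + 1)

print([walks(n) for n in (1, 2)])  # [12, 840]
print([a(n) for n in (1, 2)])      # [12, 840]
```

Brute force already agrees with the formula for the first couple of terms.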
with(combinat, multinomial): seq(multinomial(4*n, n$4)/(n+1), n=0..20); | {"url":"http://oeis.org/A207817","timestamp":"2014-04-17T01:28:11Z","content_type":null,"content_length":"14224","record_id":"<urn:uuid:e052524e-fc19-492d-aa75-ca043ec0bbf1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
On the normal density of primes in small intervals, and the difference between consecutive primes
Results 1 - 10 of 15
Cited by 69 (4 self)
This paper presents a new probabilistic primality test. Upon termination the test outputs "composite" or "prime", along with a short proof of correctness, which can be verified in deterministic
polynomial time. The test is different from the tests of Miller [M], Solovay-Strassen [SS], and Rabin [R] in that its assertions of primality are certain, rather than being correct with high
probability or dependent on an unproven assumption. The test terminates in expected polynomial time on all but at most an exponentially vanishing fraction of the inputs of length k, for every k.
This result implies: • There exists an infinite set of primes which can be recognized in expected polynomial time. • Large certified primes can be generated in expected polynomial time. Under a very
plausible condition on the distribution of primes in "small" intervals, the proposed algorithm can be shown to run in expected polynomial time on every input. This
- Scandinavian Actuarial J., 1995
Cited by 20 (1 self)
“It is evident that the primes are randomly distributed but, unfortunately, we don’t know what ‘random ’ means. ” — R. C. Vaughan (February 1990). After the first world war, Cramér began studying the
distribution of prime numbers, guided by Riesz and Mittag-Leffler. His works then, and later in the midthirties, have had a profound influence on the way mathematicians think about the distribution
of prime numbers. In this article, we shall focus on how Cramér’s ideas have directed and motivated research ever since. One can only fully appreciate the significance of Cramér’s contributions by
viewing his work in the appropriate historical context. We shall begin our discussion with the ideas of the ancient Greeks, Euclid and Eratosthenes. Then we leap in time to the nineteenth century, to
the computations and heuristics of Legendre and Gauss, the extraordinarily analytic insights of Dirichlet and Riemann, and the crowning glory of these ideas, the proof the “Prime Number Theorem ” by
Hadamard and de la Vallée Poussin in 1896. We pick up again in the 1920’s with the questions asked by Hardy and Littlewood,
, 2008
Cited by 5 (4 self)
Let HN denote the problem of determining whether a system of multivariate polynomials with integer coefficients has a complex root. It has long been known that HN ∈ P ⟹ P = NP and, thanks to recent
work of Koiran, it is now known that the truth of the Generalized Riemann Hypothesis (GRH) yields the implication HN ∉ P ⟹ P ≠ NP. We show that the assumption of GRH in the latter implication can be
replaced by either of two more plausible hypotheses from analytic number theory. The first is an effective short interval Prime Ideal Theorem with explicit dependence on the underlying field, while
the second can be interpreted as a quantitative statement on the higher moments of the zeroes of Dedekind zeta functions. In particular, both assumptions can still hold even if GRH is false. We thus
obtain a new application of Dedekind zero estimates to computational algebraic geometry. Along the way, we also apply recent explicit algebraic and analytic estimates, some due to Silberman and
Sombra, which may be of independent interest.
- MR 2595006 (2011a:11160) 34 Erez Lapid and Keith Ouellette, Truncation of Eisenstein series
Cited by 3 (0 self)
In this paper, we are concerned with establishing bounds for L(1) where L(s) is a general L-function, and specifically, we shall be most interested in the case where no good bound for the size of the
coefficients of the L-function is known. In this case, results are available due to Iwaniec [9], [10],
, 2008
Cited by 2 (2 self)
In this paper we prove that inf_{|z_k| ≥ 1} max_{ν = 1, ..., n²} |∑_{k=1}^{n} z_k^ν| = √n + O(n^{0.2625+ε}) (ε > 0). This improves on the bound inf_{|z_k| ≥ 1} max_{ν = 1, ..., n²} |∑_{k=1}^{n} z_k^ν| ≤ √(6n log(1 + n²)) of Erdős
and Rényi. In the special case of n + 1 being a prime we have previously obtained the much sharper result n ≤ inf_{|z_k| ≥ 1} max_{ν = 1, ..., n²} |∑_{k=1}^{n} z_k^ν|
Cited by 1 (0 self)
The explicit construction of infinite families of d-regular graphs which are Ramanujan is known only in the case d−1 is a prime power. In this paper, we consider the case when d − 1 is not a prime
power. The main result is that by perturbing known Ramanujan graphs and using results about gaps between consecutive primes, we are able to construct infinite families of “almost ” Ramanujan graphs
for almost every value of d. More precisely, for any fixed ǫ> 0 and for almost every value of d (in the sense of natural density), there are infinitely many d-regular graphs such that all the
non-trivial eigenvalues of the adjacency matrices of these graphs have absolute value less than (2 + ε)√(d − 1).
Cited by 1 (1 self)
We study the relations between the distribution of the zeros of the Riemann zeta-function and the distribution of primes in "almost all" short intervals. It is well known that a relation like π(x) − π(x − y) ∼ y/log x holds for almost all x ∈ [N, 2N] in a range for y that depends on the width of the available zero-free regions for the Riemann zeta-function, and also on the strength of density bounds for the
zeros themselves. We also study implications in the opposite direction: assuming that an asymptotic formula like the above is valid for almost all x in a given range of values for y, we find
zero-free regions or density bounds.
- Comp. Math , 1992
In an earlier paper [FG] we showed that the expected asymptotic formula π(x; q, a) ∼ π(x)/φ(q) does not hold uniformly in the range q < x/log^N x, for any fixed N > 0. There are several reasons
to suspect that the expected asymptotic formula might hold, for large values of q, when a is kept fixed. However, by a new construction, we show herein that this fails in the same ranges, for a fixed
and, indeed, for almost all a satisfying 0 < |a| < x/log^N x. 1. Introduction. For any positive integer q and integer a coprime to q, we have the asymptotic formula (1.1) π(x; q, a) ∼ π(x)/φ(q) as
x → ∞, for the number π(x; q, a) of primes p ≤ x with p ≡ a (mod q), where π(x) is the number of primes ≤ x, and φ is Euler's function. In fact (1.1) is known to hold uniformly for (1.2) q < log^N x
and all (a, q) = 1, for every fixed N > 0 (the Siegel–Walfisz Theorem), for almost all q < x^{1/2}/log^{2+ε} x and all (a, q) = 1 (the Bombieri–Vinogradov Theorem) and for almost all q < ...
"... Several results involving d(n!) are obtained, where d(m) denotes the number of positive divisors of m. These include estimates for d(n!)/d((n - 1)!), d(n!) - d((n - 1)!), as well as the least
number K with d((n +K)!)/d(n!) # 2. 1 ..."
Add to MetaCart
Several results involving d(n!) are obtained, where d(m) denotes the number of positive divisors of m. These include estimates for d(n!)/d((n - 1)!), d(n!) - d((n - 1)!), as well as the least number
K with d((n + K)!)/d(n!) ≥ 2. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=201736","timestamp":"2014-04-17T05:24:44Z","content_type":null,"content_length":"35116","record_id":"<urn:uuid:f36e52ad-f45d-44e4-a376-268dd36ebe84>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological rigidity of compact manifolds in dimension three
The Borel Conjecture asserts that homotopy equivalent aspherical closed manifolds are homeomorphic, which is still open in general. But, for three-dimensional manifolds, this conjecture holds (I read
this in Bessieres-Besson-Boileau), whose proof depends on the geometrization theorem (Perelman).
Question: Does the relative version of the Borel conjecture also hold for compact 3-manifolds with boundary (by the geometrization)? The relative version: If there is a homotopy equivalence between
two compact aspherical manifolds that is a homeomorphism between their boundaries, are those manifolds homeomorphic?
3-manifolds gt.geometric-topology
1 Answer
When the manifolds are Haken this is a theorem of Waldhausen. See Ian Agol's answer here.
Since your manifolds are aspherical, they are irreducible by the Poincaré conjecture. Since they have boundary and are irreducible, they are Haken.
Could you please give a brief proof that the geometrization conjecture implies the Borel conjecture in dimension 3? I read [BBB]; it seems they didn't explicitly prove the Borel Conjecture,
just said it's a corollary of GC, which I don't know why. – user16750 Oct 7 '11 at 2:52
@unknown: The Poincaré Conjecture implies that a $3$--manifold is irreducible if it is aspherical. Say your manifolds are $M$ and $N$. If they are homotopy equivalent, they
have isomorphic fundamental groups. If this group does not contain a $\mathbb{Z} \oplus \mathbb{Z}$ subgroup, then $M$ and $N$ are hyperbolic, by Geometrization, and so, by Mostow
Rigidity, they are homeomorphic. – Richard Kent Oct 7 '11 at 3:10
@unknown: Continued: If this group contains a $\mathbb{Z} \oplus \mathbb{Z}$ subgroup, then, by the (strong) Torus Theorem, $M$ and $N$ are either Haken or small Seifert fibered
spaces. If both are Haken, then Waldhausen's theorem comes to the rescue. I think you do something like show that $\pi_1$ of a small SFS isn't an amalgamated product to rule out $M$
Haken and $N$ a small SFS. If both are small SFSs, then the Euler class tells you they're homeomorphic. – Richard Kent Oct 7 '11 at 3:15
@unknown: Actually, to rule out $M$ Haken and $N$ small SFS, argue like this: $\pi_1$ has infinite cyclic center, and so $M$ is a SFS too (this is part of the proof of the Strong
Torus Theorem). The center is a completely algebraic thing, and so the quotient of $\pi_1(M)$ by the center is the orbifold fundamental group of the underlying orbifold of the
Seifert fibered structure on $M$. This must be isomorphic to the orbifold group for the underlying orbifold for $N$, and so $M$ is a small SFS. Euler class to the rescue. – Richard
Kent Oct 7 '11 at 3:28
Not the answer you're looking for? Browse other questions tagged 3-manifolds gt.geometric-topology or ask your own question. | {"url":"http://mathoverflow.net/questions/77400/topological-rigidity-of-compact-manifolds-in-dimension-three","timestamp":"2014-04-20T06:13:02Z","content_type":null,"content_length":"56982","record_id":"<urn:uuid:bc12763e-a7f7-4331-a2b5-ce2ab12a2aed>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00079-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rakudo, the leading Perl6 implementation, is not perfect, and performance is a particularly sore subject. However, the pioneer does not ask ‘Is it fast?’, but rather ‘Is it fast enough?’, or perhaps
even ‘How can I help to make it faster?’.
To convince you that Rakudo can indeed be fast enough, we’ll take a shot at a bunch of Project Euler problems. Many of those involve brute-force numerics, and that’s something Rakudo isn’t
particularly good at right now. However, that’s not necessarily a show stopper: The less performant the language, the more ingenious the programmer needs to be, and that’s where the fun comes in.
All code has been tested with Rakudo 2012.11.
We’ll start with something simple:
Problem 2
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
The solution is beautifully straight-forward:
say [+] grep * %% 2, (1, 2, *+* ...^ * > 4_000_000);
Runtime: 0.4s
Note how using operators can lead to code that’s both compact and readable (opinions may vary, of course). We used
• whatever stars * to create lambda functions
• the sequence operator (in its variant that excludes the right endpoint) ...^ to build up the Fibonacci sequence
• the divisible-by operator %% to grep the even terms
• reduction by plus [+] to sum them.
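For cross-checking, the same computation in another language gives the identical sum; a minimal Python sketch of mine (not from the post):

```python
def even_fib_sum(limit):
    """Sum the even-valued Fibonacci terms (1, 2, 3, 5, ...) not exceeding limit."""
    a, b = 1, 2
    total = 0
    while a <= limit:
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total

print(even_fib_sum(4_000_000))  # 4613732
```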
However, no one forces you to go crazy with operators – there’s nothing wrong with vanilla imperative code:
Problem 3
What is the largest prime factor of the number 600,851,475,143?
An imperative solution looks like this:
sub largest-prime-factor($n is copy) {
for 2, 3, *+2 ... * {
while $n %% $_ {
$n div= $_;
return $_ if $_ > $n;
say largest-prime-factor(600_851_475_143);
Runtime: 2.6s
Note the is copy trait, which is necessary as Perl6 binds arguments read-only by default, and that integer division div is used instead of numeric division /.
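The loop translates almost mechanically to other languages; here is a Python version (my own sketch) that additionally stops once the remaining cofactor must itself be prime:

```python
def largest_prime_factor(n):
    """Trial division over 2, 3, 5, 7, ... as in the Perl 6 loop above."""
    largest = 1
    factor = 2
    while factor * factor <= n:
        while n % factor == 0:
            largest = factor
            n //= factor
        factor += 1 if factor == 2 else 2  # 2, then odd candidates only
    # whatever remains above 1 is the largest prime factor
    return n if n > 1 else largest

print(largest_prime_factor(600_851_475_143))  # 6857
```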
Nothing fancy going on here, so we’ll move along to
Problem 53
How many, not necessarily distinct, values of C(n, r), for 1 ≤ n ≤ 100, are greater than one million?
We’ll use the feed operator ==> to factor the algorithm into separate steps:
[1], -> @p { [0, @p Z+ @p, 0] } ... * # generate Pascal's triangle
==> (*[0..100])() # get rows up to n = 100
==> map *.list # flatten rows into a single list
==> grep * > 1_000_000 # filter elements exceeding 1e6
==> elems() # count elements
==> say; # output result
Runtime: 5.2s
Note the use of the Z meta-operator to zip the lists 0, @p and @p, 0 with +.
The one-liner generating Pascal’s triangle has been stolen from Rosetta Code, another great resource for anyone interested in Perl6 snippets and exercises.
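Since Python 3.8, math.comb makes the same count a one-liner, which is handy for verifying the pipeline's result (my sketch, not the author's code):

```python
from math import comb

# Count binomial coefficients C(n, r) with 1 <= n <= 100 exceeding one million.
over_a_million = sum(1 for n in range(1, 101)
                       for r in range(n + 1)
                       if comb(n, r) > 1_000_000)
print(over_a_million)  # 4075
```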
Let’s do something clever now:
Problem 9
There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc.
Using brute force will work (solution courtesy of Polettix), but it won't be fast (~11s on my machine). Therefore, we'll use a bit of algebra to make the problem more manageable:
Let (a, b, c) be a Pythagorean triplet
a < b < c
a² + b² = c²
For N = a + b + c it follows
b = N·(N - 2a) / (2·(N - a))    (substituting c = N - a - b into a² + b² = c² and solving for b)
c = N·(N - 2a) / (2·(N - a)) + a²/(N - a)
which automatically meets b < c.
The condition a < b gives the constraint
a < (1 - 1/√2)·N
We arrive at
sub triplets(\N) {
for 1..Int((1 - sqrt(0.5)) * N) -> \a {
my \u = N * (N - 2 * a);
my \v = 2 * (N - a);
# check if b = u/v is an integer
# if so, we've found a triplet
if u %% v {
my \b = u div v;
my \c = N - a - b;
take $(a, b, c);
say [*] .list for gather triplets(1000);
Runtime: 0.5s
Note the declaration of sigilless variables \N, \a, …, how $(…) is used to return the triplet as a single item and .list – a shorthand for $_.list – to restore listy-ness.
The sub &triplets acts as a generator and uses &take to yield the results. The corresponding &gather is used to delimit the (dynamic) scope of the generator, and it could as well be put into &
triplets, which would end up returning a lazy list.
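The algebra can be checked independently; a direct Python transcription of mine confirms there is exactly one triplet for N = 1000:

```python
from math import sqrt

def triplets(N):
    """Yield Pythagorean triplets with a + b + c == N, using b = N*(N - 2a) / (2*(N - a))."""
    for a in range(1, int((1 - sqrt(0.5)) * N) + 1):
        u = N * (N - 2 * a)
        v = 2 * (N - a)
        if u % v == 0:  # b is an integer, so (a, b, c) is a triplet
            b = u // v
            yield a, b, N - a - b

print(list(triplets(1000)))  # [(200, 375, 425)]; the product abc is 31875000
```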
We can also rewrite the algorithm into dataflow-driven style using feed operators:
constant N = 1000;
1..Int((1 - sqrt(0.5)) * N)
==> map -> \a { [ a, N * (N - 2 * a), 2 * (N - a) ] } \
==> grep -> [ \a, \u, \v ] { u %% v } \
==> map -> [ \a, \u, \v ] {
my \b = u div v;
my \c = N - a - b;
a * b * c
==> say;
Runtime: 0.5s
Note how we use destructuring signature binding -> […] to unpack the arrays that get passed around.
There’s no practical benefit to use this particular style right now: In fact, it can easily hurt performance, and we’ll see an example for that later.
It is a great way to write down purely functional algorithms, though, which in principle would allow a sufficiently advanced optimizer to go wild (think of auto-vectorization and -threading).
However, Rakudo has not yet reached that level of sophistication.
But what to do if we’re not smart enough to find a clever solution?
Problem 47
Find the first four consecutive integers to have four distinct prime factors. What is the first of these numbers?
This is a problem where I failed to come up with anything better than brute force:
constant $N = 4;
my $i = 0;
for 2..* {
$i = factors($_) == $N ?? $i + 1 !! 0;
if $i == $N {
say $_ - $N + 1;
Here, &factors returns the number of prime factors. A naive implementation looks like this:
sub factors($n is copy) {
my $i = 0;
for 2, 3, *+2 ...^ * > $n {
if $n %% $_ {
repeat while $n %% $_ {
$n div= $_
return $i;
Runtime: unknown (33s for N=3)
Note the use of repeat while … {…}, the new way to spell do {…} while(…);.
We can improve this by adding a bit of caching:
BEGIN my %cache = 1 => 0;
multi factors($n where %cache) { %cache{$n} }
multi factors($n) {
for 2, 3, *+2 ...^ * > sqrt($n) {
if $n %% $_ {
my $r = $n;
$r div= $_ while $r %% $_;
return %cache{$n} = 1 + factors($r);
return %cache{$n} = 1;
Runtime: unknown (3.5s for N=3)
Note the use of BEGIN to initialize the cache first, regardless of the placement of the statement within the source file, and multi to enable multiple dispatch for &factors. The where clause allows
dynamic dispatch based on argument value.
Even with caching, we’re still unable to answer the original question in a reasonable amount of time. So what do we do now? We cheat and use Zavolaj – Rakudo’s version of NativeCall – to implement
the factorization in C.
It turns out that’s still not good enough, so we refactor the remaining Perl code and add some native type annotations:
use NativeCall;

sub factors(int $n) returns int is native('./prob047-gerdr') { * }

my int $N = 4;
my int $n = 2;
my int $i = 0;

while $i != $N {
    $i = factors($n) == $N ?? $i + 1 !! 0;
    $n = $n + 1;
}

say $n - $N;
Runtime: 1m2s (0.8s for N=3)
For comparison, when implementing the algorithm completely in C, the runtime drops to under 0.1s, so Rakudo won’t win any speed contests just yet.
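For another point of comparison, here is the same brute-force search in Python (my sketch, not part of the original post); like the naive Perl 6 version it handles N=3 quickly but slows down noticeably for N=4:

```python
def distinct_prime_factors(n):
    """Count distinct prime factors of n by trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            count += 1
            while n % d == 0:
                n //= d
        d += 1 if d == 2 else 2  # candidates 2, 3, 5, 7, ...
    if n > 1:  # leftover prime factor larger than sqrt of the original n
        count += 1
    return count

def first_consecutive(run):
    """First of `run` consecutive integers with `run` distinct prime factors each."""
    streak, n = 0, 2
    while True:
        streak = streak + 1 if distinct_prime_factors(n) == run else 0
        if streak == run:
            return n - run + 1
        n += 1
```

For example, first_consecutive(3) returns 644, since 644 = 2²·7·23, 645 = 3·5·43 and 646 = 2·17·19 each have three distinct prime factors.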
As an encore, three ways to do one thing:
Problem 29
How many distinct terms are in the sequence generated by a^b for 2 ≤ a ≤ 100 and 2 ≤ b ≤ 100?
A beautiful but slow solution to the problem can be used to verify that the other solutions work correctly:
say +(2..100 X=> 2..100).classify({ .key ** .value });
Runtime: 11s
Note the use of X=> to construct the cartesian product with the pair constructor => to prevent flattening.
Because Rakudo supports big integer semantics, there’s no loss of precision when computing large numbers like 100^100.
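As a cross-check, the same brute-force count is a one-liner in Python, whose integers are likewise arbitrary-precision (my snippet, not from the post):

```python
# Count distinct values of a**b for 2 <= a <= 100 and 2 <= b <= 100.
# Python ints are arbitrary-precision, so 100**100 is computed exactly.
distinct = len({a ** b for a in range(2, 101) for b in range(2, 101)})
print(distinct)  # 9183
```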
However, we do not actually care about the power’s value, but can use base and exponent to uniquely identify the power. We need to take care as bases can themselves be powers of already seen values:
constant A = 100;
constant B = 100;

my (%powers, %count);

# find bases which are powers of a preceding root base
# store decomposition into base and exponent relative to root
for 2..Int(sqrt A) -> \a {
    next if a ~~ %powers;
    %powers{a, a**2, a**3 ...^ * > A} = a X=> 1..*;
}

# count duplicates
for %powers.values -> \p {
    for 2..B -> \e {
        # raise to power \e
        # classify by root and relative exponent
        ++%count{p.key => p.value * e}
    }
}

# add +%count as one of the duplicates needs to be kept
say (A - 1) * (B - 1) + %count - [+] %count.values;
Runtime: 0.9s
Note that the sequence operator ...^ infers geometric sequences if at least three elements are provided and that list assignment %powers{…} = … works with an infinite right-hand side.
Again, we can do the same thing in a dataflow-driven, purely-functional fashion:
sub cross(@a, @b) { @a X @b }
sub dups(@a) { @a - @a.uniq }
constant A = 100;
constant B = 100;
2..Int(sqrt A)
==> map -> \a { (a, a**2, a**3 ...^ * > A) Z=> (a X 1..*).tree } \
==> reverse()
==> hash()
==> values()
==> cross(2..B)
==> map -> \n, [\r, \e] { (r) => e * n } \
==> dups()
==> ((A - 1) * (B - 1) - *)()
==> say();
Runtime: 1.5s
Note how we use &tree to prevent flattening. We could have gone with X=> instead of X as before, but it would make destructuring via -> \n, [\r, \e] more complicated.
As expected, this solution doesn’t perform as well as the imperative one. I’ll leave it as an exercise to the reader to figure out how it works exactly ;)
That’s it
Feel free to add your own solutions to the Perl6 examples repository under euler/.
If you’re interested in bioinformatics, you should take a look at Rosalind as well, which also has its own (currently only sparsely populated) examples directory rosalind/.
Last but not least, some solutions for the Computer Language Benchmarks Game – also known as the Debian language shootout – can be found under shootout/.
You can contribute by sending pull requests, or better yet, join #perl6 on the Freenode IRC network and ask for a commit bit.
Have the appropriate amount of fun!

{"url":"https://perl6advent.wordpress.com/author/gerdr/","timestamp":"2014-04-18T08:03:34Z","content_type":null,"content_length":"29865","record_id":"<urn:uuid:2563ce8f-b97f-4b50-bc4c-f35106a9eb38>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
XYZ-Wing, sudokuwiki.org
In this example the candidate number is 1 and F9 is the Hinge. It can see a 1/2 in D9 and a 1/4 in F1. We can reason this way: if D9 contains a 2, then F1 and F9 become a naked pair of 1/4, and the naked pair rule applies. The same goes for F1: if that is a 4, then D9 and F9 become a naked pair of 1/2. And if any of the three cells is a 1, then 1 is still part of the formation. Either way, any 1 visible to all three cells must be removed, in this case the 1 in F7.
XYZ-Wing example 1.

{"url":"http://www.sudokuwiki.org/Print_XYZ_Wing","timestamp":"2014-04-18T23:15:21Z","content_type":null,"content_length":"6394","record_id":"<urn:uuid:27d79cea-6d05-4862-81a3-4cbe8bca6570>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
Parcs Cross-Section format
For a BWR transient calculation I'm trying to convert cross-sections I got from PSG2/Serpent simulations into PARCS [CrossSection] input files.
My Question:
Parcs makes use of
SNF Nu-fission cross section
SKF Kappa-fission cross section
I assume this means Nu (as in neutrons/fission-event times fission cross section) but I'm not familiar with kappa in this context. Anyone have an idea what it denotes?
Any and all documentation or insights about PARCS cross-section input files would be more than welcome.

{"url":"http://www.physicsforums.com/showthread.php?t=416995","timestamp":"2014-04-18T03:01:25Z","content_type":null,"content_length":"22999","record_id":"<urn:uuid:1de5059e-a8b2-4ea7-a657-0a5651b49acb>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
urd — A discrete user-defined-distribution random generator that can be used as a function.
itableNum -- number of table containing the random-distribution function. Such table is generated by the user. See GEN40, GEN41, and GEN42. The table length does not need to be a power of 2
ktableNum -- number of table containing the random-distribution function. Such table is generated by the user. See GEN40, GEN41, and GEN42. The table length does not need to be a power of 2
urd is the same opcode as duserrnd, but can be used in function fashion.
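The general idea behind such table-driven generators (build the distribution table once, then map uniform random numbers through it) can be sketched in Python; this is only an illustration of inverse-transform sampling, not Csound's actual implementation:

```python
import bisect
import random

def make_discrete_sampler(values, weights, rng=random.random):
    """Sample from a user-defined discrete distribution: build a
    cumulative table once, then look up a uniform random number in it.
    `rng` is injectable so the sampler can be tested deterministically."""
    total = float(sum(weights))
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0  # guard against floating-point drift

    def sample():
        return values[bisect.bisect_left(cdf, rng())]
    return sample
```

For example, make_discrete_sampler([60, 64, 67], [1, 1, 2]) returns a function that draws 67 about half the time and 60 and 64 a quarter of the time each.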
For a tutorial about random distribution histograms and functions see:
● D. Lorrain. "A panoply of stochastic cannons". In C. Roads, ed. 1989. Music machine. Cambridge, Massachusetts: MIT Press, pp. 351-379.

{"url":"http://www.csounds.com/manualOLPC/urd.html","timestamp":"2014-04-16T07:40:07Z","content_type":null,"content_length":"5597","record_id":"<urn:uuid:12396908-14c3-42fd-be34-90657fb0ac98>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Bohr's semiclassical approach
What makes the Bohr model "semi"-classical is that he required that the angular momentum of the electron be quantized (integer multiples of [itex]\hbar[/itex]).
Using this requirement: [tex] L = mvr = n\hbar [/tex]
and the classical equation for circular motion under a central (electrostatic) force :
[tex]F_{centripetal} = mv^2/r = ke^2/r^2 [/tex]
gives you Bohr's results for the energies, radii and orbital velocities of the different orbits.

{"url":"http://www.physicsforums.com/showthread.php?t=107524","timestamp":"2014-04-20T11:29:04Z","content_type":null,"content_length":"33950","record_id":"<urn:uuid:698c7e73-b1d9-4d9a-8dec-4749c2ac426f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00403-ip-10-147-4-33.ec2.internal.warc.gz"}
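Carrying the algebra of the two relations above through (standard textbook steps, not part of the original reply): solving the angular-momentum condition for the speed, [itex]v = n\hbar/(mr)[/itex], and substituting into the force balance [itex]mv^2 = ke^2/r[/itex] gives the quantized radii, and with them the orbit energies:

[tex]r_n = \frac{n^2\hbar^2}{mke^2}[/tex]

[tex]E_n = \frac{1}{2}mv^2 - \frac{ke^2}{r_n} = -\frac{ke^2}{2r_n} = -\frac{mk^2e^4}{2n^2\hbar^2}[/tex]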
Kernel Matrix Evaluation
Canh Hao Nguyen, Tu Bao Ho
We study the problem of evaluating the goodness of a kernel matrix for a classification task. As kernel matrix evaluation is usually used in other expensive procedures like feature and model selection, the goodness measure must be calculated efficiently. Most previous approaches are not efficient, except for Kernel Target Alignment (KTA), which can be calculated in O(n²) time complexity. Although KTA is widely used, we show that it has some serious drawbacks. We propose an efficient surrogate measure to evaluate the goodness of a kernel matrix based on the data distributions of classes in the feature space. The measure not only overcomes the limitations of KTA, but also possesses other properties like invariance, efficiency and an error bound guarantee. Comparative experiments
show that the measure is a good indication of the goodness of a kernel matrix.

{"url":"http://www.ijcai.org/papers07/Abstracts/IJCAI07-159.html","timestamp":"2014-04-17T06:53:13Z","content_type":null,"content_length":"1652","record_id":"<urn:uuid:d1494b75-4bff-4b08-a246-2b5c13733355>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
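For reference, the Kernel Target Alignment discussed in the abstract above is the normalized Frobenius inner product between the kernel matrix K and the ideal target matrix yyᵀ; here is a minimal O(n²) Python sketch (mine, not the paper's code):

```python
def kernel_target_alignment(K, y):
    """Empirical alignment between kernel matrix K and labels y in {-1, +1}:
    A(K, yy^T) = <K, yy^T>_F / sqrt(<K, K>_F * <yy^T, yy^T>_F),
    where <yy^T, yy^T>_F = n^2 for +/-1 labels.  Runs in O(n^2),
    matching the complexity quoted in the abstract."""
    n = len(y)
    k_yy = sum(K[i][j] * y[i] * y[j] for i in range(n) for j in range(n))
    k_k = sum(K[i][j] ** 2 for i in range(n) for j in range(n))
    return k_yy / (n * k_k ** 0.5)
```

When K is exactly the target matrix yyᵀ, the alignment is 1, its maximum value.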
Box Plots
In 1977, John Tukey published an efficient method for displaying a five-number data summary. The graph is called a boxplot (also known as a box and whisker plot) and summarizes the following
statistical measures:
• median
• upper and lower quartiles
• minimum and maximum data values
The following is an example of a boxplot.
[Figure: example boxplot]
The plot may be drawn either vertically as in the above diagram, or horizontally.
Interpreting a Boxplot
The boxplot is interpreted as follows:
• The box itself contains the middle 50% of the data. The upper edge (hinge) of the box indicates the 75th percentile of the data set, and the lower hinge indicates the 25th percentile. The range
of the middle two quartiles is known as the inter-quartile range.
• The line in the box indicates the median value of the data.
• If the median line within the box is not equidistant from the hinges, then the data is skewed.
• The ends of the vertical lines or "whiskers" indicate the minimum and maximum data values, unless outliers are present, in which case the whiskers extend only to the most extreme data points within 1.5 times the inter-quartile range of the hinges.
• The points outside the ends of the whiskers are outliers or suspected outliers.
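The interpretation rules above translate directly into code. Here is a minimal Python sketch using only the standard library (note that several quartile conventions exist; statistics.quantiles uses the "exclusive" method by default, so the hinge values may differ slightly from other packages):

```python
from statistics import quantiles

def boxplot_stats(data, k=1.5):
    """Five-number summary plus whisker ends and outliers,
    using the k * IQR rule described above (k = 1.5 by default)."""
    xs = sorted(data)
    q1, med, q3 = quantiles(xs, n=4)           # lower hinge, median, upper hinge
    iqr = q3 - q1                              # inter-quartile range
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    outliers = [x for x in xs if x < lo_fence or x > hi_fence]
    whiskers = (min(x for x in xs if x >= lo_fence),
                max(x for x in xs if x <= hi_fence))
    return {"min": xs[0], "q1": q1, "median": med, "q3": q3, "max": xs[-1],
            "iqr": iqr, "whiskers": whiskers, "outliers": outliers}
```

For example, boxplot_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]) flags 100 as an outlier and caps the upper whisker at 9, the most extreme point within the upper fence.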
Boxplot Enhancements
Beyond the basic information, boxplots sometimes are enhanced to convey additional information:
• The mean and its confidence interval can be shown using a diamond shape in the box.
• The expected range of the median can be shown using notches in the box.
• The width of the box can be varied in proportion to the log of the sample size.
Advantages of Boxplots
Boxplots have the following strengths:
• Graphically display a variable's location and spread at a glance.
• Provide some indication of the data's symmetry and skewness.
• Unlike many other methods of data display, boxplots show outliers.
• By using a boxplot for each categorical variable side-by-side on the same graph, one quickly can compare data sets.
One drawback of boxplots is that they tend to emphasize the tails of a distribution, which are the least certain points in the data set. They also hide many of the details of the distribution.
Displaying a histogram in conjunction with the boxplot helps in this regard, and both are important tools for exploratory data analysis.
{"url":"http://www.netmba.com/statistics/plot/box/","timestamp":"2014-04-20T05:49:38Z","content_type":null,"content_length":"8398","record_id":"<urn:uuid:a7d0ac27-2477-4276-bf6f-aa7629ee8e62>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Find dy/dx in terms of t if x=te^t, y=-10t-10e^t. I know the steps but I don't know what e^t is....Is the derivative just e^t? Please help!
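A sketched answer (mine, not from the original thread): yes, the exponential function is its own derivative, so d/dt e^t = e^t. For a parametric curve, use dy/dx = (dy/dt)/(dx/dt):

\[\frac{dx}{dt} = e^t + te^t = e^t(1+t), \qquad \frac{dy}{dt} = -10 - 10e^t\]

\[\frac{dy}{dx} = \frac{-10 - 10e^t}{e^t(1+t)}\]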