content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
[lammps-users] multiple partition runs
Dear forum,
I am not sure whether this is a lammps issue. Recently I figured out how to set multiple partition jobs using LAMMPS.
Though the script exactly works as predicted except one thing. The job continues to run forever even though all the commands
in the input script are executed and the loop is complete. Here is the input script
variable nsims equal 4
label loop1
variable t uloop ${nsims}
if $t == 1 then “variable n equal 20”
if $t == 1 then “variable r equal 5.26”
if $t == 2 then “variable n equal 20”
if $t == 2 then “variable r equal 5.41”
if $t == 3 then “variable n equal 200”
if $t == 3 then “variable r equal 5.41”
if $t == 4 then “variable n equal 200”
if t == 4 then "variable r equal 5.57" shell cd N_{n}/rt_{r} log log.lammps{t}
-------- BODY OF THE SCRIPT ----------------
run 1000000
write_restart restart.equil.*
next t
shell cd …/…/
jump in.ab_berendsen loop1
Any inputs will be appreciated.
Normally, you should trigger an exit when a variable
is exhausted. I would expt with several quick/short
runs to see if you can get it to work, using the
examples in the next/jump doc pages as starting points.
Yes, I did indeed run some short test runs and the outcome is the same, the
job continues to run even after the simulation has finished. One more thing
which I noticed is nothing is written to screen.* files. Also tried the following
Please post a simple, fast script with any additional input
files needed, and I'll try it out.
Find the attached bzip file which contains input script and necessary input data files.
Looking forward to hearing from you.
test.tar.bz2 (584 KB)
|
{"url":"https://matsci.org/t/lammps-users-multiple-partition-runs/8712","timestamp":"2024-11-07T01:01:53Z","content_type":"text/html","content_length":"21819","record_id":"<urn:uuid:f5f87c59-c800-40d6-b4dd-8beac21e0a18>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00030.warc.gz"}
|
EiffelStudio: an EiffelSoftware project - User contributions [en]EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2File:Eb2 iterator.pngFile:Eb2 container.pngEiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2File:Eb2 container.pngEiffelBase2EiffelBase2File:Eb2 iterator.pngFile:Eb2 container.pngEiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2EiffelBase2File:Eb2 iterator.pngEiffelBase2
https://dev.eiffel.com/api.php?action=feedcontributions&user=Kenga&feedformat=atom 2024-11-02T12:39:08Z User contributions MediaWiki 1.24.1 https://dev.eiffel.com/index.php?title=EiffelBase2&diff=
15154 2014-04-30T11:04:53Z <p>Kenga: /* Iterators */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest
version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The
source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * &
lt;e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2
are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and
documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of
descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2
differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a
container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the
class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is
omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the
class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|
1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces==
=<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations:
V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br
/> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you
can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item) <br /> end<br />
end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br
/> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you
can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at
(5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And
insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from
HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table
["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or
equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot derive a hash code
(or order) from any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's identity rather than its
content.<br /> For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br
/> feature -- Modification<br /> ...<br /> end<br /> <br /> <br /> class GUI_APPLICATION<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> -- Cars in the city and their graphical
views.<br /> <br /> escape_police (c: CAR)<br /> -- Drive `c' away, repaint it and update its view.<br /> require<br /> car_in_city: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change
location<br /> c.repaint (white) -- change color<br /> cars [c].update -- ... but still can be used to access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br
/> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br
/> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with
values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into
standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> =
==Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents
explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also
introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are
straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags.
Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model
components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to
the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model
of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model:
target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index:
INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br />
status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a
reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced
specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>
target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br
/> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before
compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result
(or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their
effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed
to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>
Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write
preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach
constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e
>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G
<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status
report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature --
Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br
/> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <=
index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose,
whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s
model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the
heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>
V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br
/> ...<br /> bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant
in <e>V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br />
EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [https://
bitbucket.org/nadiapolikarpova/traffic https://bitbucket.org/nadiapolikarpova/traffic]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14512 2012-06-27T13:22:34Z <p>Kenga: /*
Status and roadmap */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future
replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source
code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two
clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> -
Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br />
*Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation,
the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding
and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between
''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>
RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of
EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the
current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an
''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|
Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With
immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE
[STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br />
feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate
through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br />
end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br
/> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you
can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at
(5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And
insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from
HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table
["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or
equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot derive a hash code
(or order) from any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's identity rather than its
content.<br /> For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br
/> feature -- Modification<br /> ...<br /> end<br /> <br /> <br /> class GUI_APPLICATION<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> -- Cars in the city and their graphical
views.<br /> <br /> escape_police (c: CAR)<br /> -- Drive `c' away, repaint it and update its view.<br /> require<br /> car_in_city: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change
location<br /> c.repaint (white) -- change color<br /> cars [c].update -- ... but still can be used to access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br
/> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br
/> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with
values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into
standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> =
==Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents
explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also
introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are
straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags.
Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model
components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to
the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model
of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model:
target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index:
INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br />
status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a
reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced
specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>
target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br
/> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before
compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result
(or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their
effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed
to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>
Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write
preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach
constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e
>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G
<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status
report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature --
Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br
/> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <=
index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose,
whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s
model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the
heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>
V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br
/> ...<br /> bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant
in <e>V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br />
EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [https://
bitbucket.org/nadiapolikarpova/traffic https://bitbucket.org/nadiapolikarpova/traffic]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14405 2012-04-11T14:19:29Z <p>Kenga: /* Sets
and tables */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement
for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is
available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br
/> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model
Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The
library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts
associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "
taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers''
and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e>
stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into
two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for
brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable''
interface to the data,<br /> in other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class
hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces
you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations
on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} --
Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any
container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> &
lt;/e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br
/> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with
lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at
(5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And
insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from
HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table
["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or
equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot derive a hash code
(or order) from any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's identity rather than its
content.<br /> For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br
/> feature -- Modification<br /> ...<br /> end<br /> <br /> <br /> class GUI_APPLICATION<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> -- Cars in the city and their graphical
views.<br /> <br /> escape_police (c: CAR)<br /> -- Drive `c' away, repaint it and update its view.<br /> require<br /> car_in_city: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change
location<br /> c.repaint (white) -- change color<br /> cars [c].update -- ... but still can be used to access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br
/> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br
/> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with
values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into
standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> =
==Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents
explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also
introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are
straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags.
Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model
components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to
the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model
of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model:
target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index:
INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br />
status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a
reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced
specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>
target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br
/> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before
compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result
(or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their
effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed
to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>
Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write
preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach
constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e
>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G
<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status
report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature --
Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br
/> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <=
index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose,
whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s
model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the
heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>
V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br
/> ...<br /> bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant
in <e>V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br />
EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://
traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14404 2012-04-11T14:18:59Z <p>Kenga: /* Sets and tables */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the
repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e&
gt;structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library
of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to
allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and
features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is
a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite
sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers
and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the
diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in
other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients
read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> --
(Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br />
station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br />
do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same
thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br
/> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br />
do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the
first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br />
i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br
/> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br />
print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you
can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br />
create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table [&
quot;dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET
</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot derive a hash code (or order) from
any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's identity rather than its content.<br />
For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br /> feature --
Modification<br /> ...<br /> end<br /> <br /> <br /> class GUI_APPLICATION<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> -- Cars in the city and their graphical views.<br /> <br
/> escape_police (c: CAR)<br /> -- Drive `c' away, repaint it and update its view.<br /> require<br /> car_on_map: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change location<br /> c.repaint
(white) -- change color<br /> cars [c].update -- ... but still can be used to access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br /> Iterators in
EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br />
array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from
a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br />
(create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br />
EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its
abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a
special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to the target
container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class
in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: target,
sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER
<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status:
specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference
(to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced specifically
for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>target</e>
and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> Those
annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before compiling it.<br
/> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result (or the model of
the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their effects on the
model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed to be changed
by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>Current</e&
gt; and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: <br
/> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values
of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR&
lt;/e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at
current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br />
off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br
/> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index &
lt;= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each
of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to
provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make
sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>V_CONTAINER</e>:<br /> &
lt;e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br /> ...<br />
bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>
V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is
currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://
traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14403 2012-04-11T14:16:49Z <p>Kenga: /* Sets and tables */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the
repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e&
gt;structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library
of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to
allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and
features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is
a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite
sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers
and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the
diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in
other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients
read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> --
(Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br />
station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br />
do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same
thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br
/> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br />
do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the
first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br />
i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br
/> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br />
print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you
can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br />
create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table [&
quot;dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET
</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot derive a hash code (or order) from
any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's identity rather than its content.<br />
For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br /> feature --
Modification<br /> ...<br /> end<br /> <br /> <br /> class GUI_APPLICATION<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> <br /> escape_police (c: CAR)<br /> require<br />
car_on_map: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change location<br /> c.repaint (white) -- change color<br /> cars [c].update -- ... but still can be used to access its view<br />
end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream
into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br
/> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10&
quot;, agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br
/> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that
each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for
a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of
the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e&
gt; clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> --
List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE
[G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>
V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br
/> As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called
from the code can be also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification
code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular
features in their terms. <br /> For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and
(the models of) the arguments. <br /> For commands we define their effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e&
gt; clauses that list all the model queries whose values are allowed to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m&
lt;/e> <br /> for each model query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The
model-based contracts approach does not constrain the way in which you write preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed
otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br />
Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br />
class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition:
Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition:
Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move
cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail
(index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a
class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new
model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e
>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example,
look at <e>V_SET</e>, which inherits directly from <e>V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br />
note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br
/> Here the linking invariant, provided as part of the class invariant in <e>V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>
set</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> *
Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14402
2012-04-11T14:14:25Z <p>Kenga: /* Sets and tables */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest
version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The
source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * &
lt;e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2
are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and
documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of
descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2
differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a
container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the
class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is
omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the
class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|
1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces==
=<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations:
V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br
/> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you
can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br />
end<br /> end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]
<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more
advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before
position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x
> 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table
(keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat&
quot;] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a
custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]
<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br
/> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br
/> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>
V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but the content of the object can change, so you cannot
derive a hash code (or order) from any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERENCE_HASHABLE</e>,<br /> which derives a hash code from the object's
identity rather than its content.<br /> For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature -- Access<br /> color: COLOR<br /> <br />
location: POINT<br /> <br /> feature -- Modification<br /> ...<br /> end<br /> <br /> <br /> class MAP<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR, CAR_VIEW]<br /> <br /> escape_police (c:
CAR)<br /> require<br /> car_on_map: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change location<br /> c.repaint (white) -- change color<br /> cars [c].update -- ... but still can be used to
access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by
piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array
with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("
1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5
6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This
method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements
is a model for a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of
each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's &
lt;e>note</e> clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target:
V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence:
MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class
<e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer
index. <br /> As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to
be called from the code can be also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e>
note to a query to indicate that its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from
non-specification code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the
postconditions of regular features in their terms. <br /> For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e
>Current</e> and (the models of) the arguments. <br /> For commands we define their effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them
with <e>modify</e> clauses that list all the model queries whose values are allowed to be changed by the command.<br /> These clauses can be used by tools to generate additional
postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in the <e>modify</e>
clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: <br /> it is not necessary to express them through model queries if they can be
conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of
the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target,
sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br />
ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br />
ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current
position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front
(index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br />
===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own
model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model
query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in
the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br /> class V_CONTAINER
[G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain |=| set<br /> bag_definition:
bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>V_SET</e> completely defines an old model query <e>bag
</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It
has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/
index.php?title=EiffelBase2&diff=14401 2012-04-11T13:34:40Z <p>Kenga: /* Sets and tables */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a
general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br
/> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/
nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current
document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals=
=<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of
the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and
consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==
Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values.
A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>
V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e>
class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this
interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage
examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br
/> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do
<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br
/> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop
<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])
<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e
><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> --
Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x:
INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you
create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create
table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br />
end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br
/> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING):
INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> --
Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and &
lt;e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> Sometimes you want to use an object as a key in a table,<br /> but
the content of the object can change, so you cannot derive a hash code (or order) from any of its attributes.<br /> EiffelBase2 provides a utility class <e>V_REFERNCE_HASHABLE</e>,<br />
which derives a hash code from the object's identity rather than its content.<br /> For example:<br /> <e><br /> class <br /> CAR<br /> inherit <br /> V_REFERENCE_HASHABLE<br /> <br /> feature
-- Access<br /> color: COLOR<br /> <br /> location: POINT<br /> <br /> feature -- Modification<br /> ...<br /> end<br /> <br /> <br /> class MAP<br /> <br /> feature<br /> cars: V_HASH_TABLE [CAR,
CAR_VIEW]<br /> <br /> escape_police (c: CAR)<br /> require<br /> car_on_map: cars.has_key (c)<br /> do<br /> c.move (100, 100) -- change location<br /> c.repaint (white) -- change color<br /> cars
[c].update -- ... but still can be used to access its view<br /> end<br /> end<br /> </e><br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams.
Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br />
create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe
(create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe
(array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the
''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a
class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs
(references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions.
The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by
standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br />
<br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer
is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries
under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR
[G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end
<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> &
lt;/e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_LIST</e>), a mathematical sequence
of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e&
gt;sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br />
We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a
specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of
introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result (or the model of the result, in case the query returns a fresh
object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their effects on the model queries of <e>Current</e> and
the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed to be changed by the command.<br /> These clauses can be used
by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in
the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: <br /> it is not necessary to express them through
model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely
the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note
<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not
off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?
<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the
right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old
(sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e&
gt;<br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to
represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition
of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts
make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>V_CONTAINER</e>:<br /> <e><br /> note<br /> model: bag<br />
class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain |=| set<br />
bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>V_SET</e> completely defines an old model
query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH
Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga
https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14385 2012-04-05T15:33:31Z <p>Kenga: /* Inheritance */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2
is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.
<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/
nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current
document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals=
=<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of
the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and
consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==
Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values.
A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>
V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e>
class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this
interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage
examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br
/> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do
<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br
/> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop
<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])
<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e
><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> --
Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x:
INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you
create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create
table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br />
end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br
/> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING):
INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> --
Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and &
lt;e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a
special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY
[INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br
/> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create
{V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2
is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state
space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special
mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to the target
container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class
in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: target,
sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> -- List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER
<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status:
specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference
(to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced specifically
for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>target</e>
and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> Those
annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before compiling it.<br
/> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br /> For queries we define their result (or the model of
the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. <br /> For commands we define their effects on the
model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the model queries whose values are allowed to be changed
by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model query <e>m</e> of <e>Current</e&
gt; and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: <br
/> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values
of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR&
lt;/e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at
current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br />
off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br
/> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index &
lt;= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each
of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to
provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make
sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, look at <e>V_SET</e>, which inherits directly from <e>V_CONTAINER</e>:<br /> &
lt;e><br /> note<br /> model: bag<br /> class V_CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: set<br /> class V_SET[G]<br /> ...<br /> invariant<br /> ...<br />
bag_domain_definition: bag.domain |=| set<br /> bag_definition: bag.is_constant (1)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>
V_SET</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>set</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is
currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://
traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14384 2012-04-05T15:29:03Z <p>Kenga: /* Status and roadmap */</p> <hr />
<div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the
repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e&
gt;structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library
of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to
allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and
features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is
a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite
sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers
and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the
diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in
other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients
read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> --
(Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br />
station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br />
do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same
thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br
/> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br />
do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the
first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br />
i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br
/> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br />
print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you
can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br />
create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table [&
quot;dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET
</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream
into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br
/> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10&
quot;, agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br
/> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that
each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for
a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of
the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e&
gt; clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> --
List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE
[G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>
V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br
/> As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called
from the code can be also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification
code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular
features in their terms. <br /> For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and
(the models of) the arguments. <br /> For commands we define their effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e&
gt; clauses that list all the model queries whose values are allowed to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m&
lt;/e> <br /> for each model query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The
model-based contracts approach does not constrain the way in which you write preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed
otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br />
Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br />
class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition:
Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition:
Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move
cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail
(index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a
class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new
model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e
>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example,
suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br />
end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition:
bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class
invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br />
<br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http:
//traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14383 2012-04-05T15:27:38Z <p>Kenga: /* Models */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the
repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e&
gt;structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library
of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to
allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and
features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is
a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite
sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers
and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the
diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in
other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients
read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> --
(Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br />
station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br />
do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same
thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br
/> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br />
do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the
first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br />
i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br
/> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br />
print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you
can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br />
create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table [&
quot;dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET
</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream
into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br
/> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10&
quot;, agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br
/> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that
each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for
a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of
the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e&
gt; clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> target: V_LIST<br /> --
List to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE
[G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> <br /> Here we declared the model of class <e>
V_LIST_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_LIST</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br
/> As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called
from the code can be also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification
code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular
features in their terms. <br /> For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and
(the models of) the arguments. <br /> For commands we define their effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e&
gt; clauses that list all the model queries whose values are allowed to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m&
lt;/e> <br /> for each model query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The
model-based contracts approach does not constrain the way in which you write preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed
otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br />
Let us add a couple of features and model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br />
class V_LIST_ITERATOR [G]<br /> <br /> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition:
Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition:
Result = not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move
cursor.<br /> note<br /> modify: sequence<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail
(index + 1))<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a
class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new
model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e
>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example,
suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br />
end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition:
bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class
invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br />
<br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://
traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14382 2012-04-05T15:26:05Z <p>Kenga: /* Contracts */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the
repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e&
gt;structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library
of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to
allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and
features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is
a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite
sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers
and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the
diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in
other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients
read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br /> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> --
(Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br />
station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br />
do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same
thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br
/> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br />
do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the
first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br />
i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br
/> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br />
print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you
can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br />
create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table [&
quot;dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET
</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream
into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br
/> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10&
quot;, agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br
/> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that
each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for
a stack or a queue. <br /> A triple consisting of a reference to the target container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of
the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e&
gt; clause. For example:<br /> <e><br /> note<br /> model: target, sequence, index<br /> class V_ITERATOR [G]<br /> feature -- Access<br /> <br /> target: V_CONTAINER<br /> -- Container to
iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> -- Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br />
-- Sequence of elements in `target'.<br /> note<br /> status: specification<br /> deferred<br /> end<br /> end<br /> </e><br /> Here we declared the model of class <e>V_ITERATOR</e>
consisting of tree components: <br /> a reference (to <e>V_CONTAINER</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see,
model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be
also reused as model queries (as <e>target</e> and <e>index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that
its primary purpose is specification.<br /> Those annotations can be used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove
such features from the code before compiling it.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of regular features in their terms. <br />
For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments.
<br /> For commands we define their effects on the model queries of <e>Current</e> and the arguments.<br /> We also supply them with <e>modify</e> clauses that list all the
model queries whose values are allowed to be changed by the command.<br /> These clauses can be used by tools to generate additional postconditions <e>m = old m</e> <br /> for each model
query <e>m</e> of <e>Current</e> and other arguments that is not mentioned in the <e>modify</e> clause.<br /> <br /> The model-based contracts approach does not
constrain the way in which you write preconditions: <br /> it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in
the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and
model-based contracts into the class <e>V_LIST_ITERATOR</e> shown above:<br /> <e><br /> note<br /> model: target, sequence, index<br /> <br /> class V_LIST_ITERATOR [G]<br /> <br
/> feature -- Access<br /> <br /> item: G<br /> -- Item at current position.<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br
/> ...<br /> <br /> feature -- Status report<br /> <br /> off: BOOLEAN<br /> -- Is current position off scope?<br /> deferred<br /> ensure<br /> definition: Result = not sequence.domain [index]<br />
end<br /> ...<br /> <br /> feature -- Extension<br /> <br /> extend_right (v: G)<br /> -- Insert `v' to the right of current position.<br /> -- Do not move cursor.<br /> note<br /> modify: sequence
<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index) & v + sequence.tail (index + 1))<br /> end<br /> ...<br /> <br />
invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e&
gt;A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not
reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the
parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits
directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence,
index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result
:= bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old
model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at
ETH Zurich.<br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga
https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14381 2012-04-05T14:38:51Z <p>Kenga: /* Models */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a
general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br
/> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/
nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current
document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals=
=<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of
the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and
consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==
Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values.
A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>
V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e>
class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this
interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage
examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br
/> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do
<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br
/> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop
<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])
<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e
><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> --
Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x:
INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you
create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create
table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br />
end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br
/> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING):
INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> --
Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and &
lt;e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a
special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY
[INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br
/> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create
{V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2
is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state
space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special
mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence of elements is a model for a stack or a queue. <br /> A triple consisting of a reference to the target
container, the sequence of elements and an integer is a model for an iterator.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class
in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: target,
sequence, index<br /> class V_ITERATOR [G]<br /> feature -- Access<br /> <br /> target: V_CONTAINER<br /> -- Container to iterate over.<br /> deferred<br /> end<br /> <br /> index: INTEGER<br /> --
Current position.<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of elements in `target'.<br /> note<br /> status: specification
<br /> deferred<br /> end<br /> end<br /> </e><br /> Here we declared the model of class <e>V_ITERATOR</e> consisting of tree components: <br /> a reference (to <e>V_CONTAINER
</e>), a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. <br /> As you can see, model queries are not necessarily introduced specifically for specification
purposes (as is the case with <e>sequence</e>); <br /> regular queries meant to be called from the code can be also reused as model queries (as <e>target</e> and <e>
index</e> in this example). <br /> We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> Those annotations can be
used by tools to check, if a specification-only feature accidentally got called from non-specification code,<br /> and to remove such features from the code before compiling it.<br /> <br /> ===
Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in
case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the
class invariant. For commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is
equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through
model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely
the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br />
model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br />
definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition:
not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br
/> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br
/> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e
>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does
not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the
parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits
directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence,
index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result
:= bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old
model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at
ETH Zurich.<br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga
https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14380 2012-04-05T14:26:49Z <p>Kenga: /* Usage examples */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br />
EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel
development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://
bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The
rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br />
<br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly
as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set
of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>
V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br /> In the diagrams below asterisk and italics font indicates a <e>deferred</e>
class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br /> in other words, it is impossible to modify the content of the container using this
interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage
examples==<br /> <br /> ===Immutable interfaces===<br /> With immutable interfaces you can give your clients read-only access to a container you store:<br /> <e><br /> class TRAM_LINE<br /> <br
/> feature -- Access<br /> <br /> stations: V_SEQUENCE [STATION]<br /> -- Stations on the line.<br /> -- (Clients can traverse the stations, but cannot replace them, remove or add new ones.)<br /> do
<br /> Result := station_list<br /> end<br /> <br /> feature {NONE} -- Implementation<br /> <br /> station_list: V_LIST [STATION]<br /> -- List of line stations.<br /> end<br /> </e><br /> <br
/> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop
<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])
<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e
><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> --
Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x:
INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you
create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create
table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br />
end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br
/> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING):
INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> --
Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and &
lt;e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a
special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY
[INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br
/> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create
{V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2
is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state
space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special
mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list
with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under
the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br />
index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status:
specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>
MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>);
regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular features in
their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the
arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define their effects on the model queries of <e>Current</e> and the
arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not constrain
the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based
contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts
into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current
position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br
/> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> --
Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail
(index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br />
<br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its
own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old
model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense
in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br
/> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain =
sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant,
provided as part of the class invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br />
==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> It has been used in the following projects:<br /> * Traffic 4: modeling public
transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14378 2012-04-05T13:47:02Z <p>Kenga: /*
Design */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for
the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available
from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br />
* <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a
library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is
designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with
classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania"
(classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and ''streams''. A
container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an
infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies:
containers and streams/iterators. <br /> All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity.<br /> <br />
In the diagrams below asterisk and italics font indicates a <e>deferred</e> class.<br /> Lighter fill color indicates that the class provides an ''immutable'' interface to the data,<br />
in other words, it is impossible to modify the content of the container using this interface.<br /> <br /> [[Image:eb2_container.png|1000px|thumb|none|Container class hierarchy]]<br /> <br />
[[Image:eb2_iterator.png|1000px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e
><br /> do_something (container: V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br />
The same thing using the explicit syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i :=
container.new_cursor<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:
<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back
(0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after
it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> &
lt;e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] :
= 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on
keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive
table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br
/> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e&
gt;V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>
V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream
into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br
/> array.new_cursor.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10&
quot;, agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br
/> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that
each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or
a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You
define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br />
note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br />
sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e
> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for
specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this
example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing
model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a
function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define
their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not
modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can be
conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of
the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br />
class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence
[index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br
/> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br />
sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.
<br /> It has been used in the following projects:<br /> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://
dev.eiffel.com/index.php?title=File:Eb2_iterator.png&diff=14377 2012-04-05T13:40:05Z <p>Kenga: uploaded a new version of "Image:Eb2 iterator.png": Iterator hierarchy of EiffelBase2</p> <hr
/> <div>Class diagram of the iterator classes in EiffelBase2 library.</div> Kenga https://dev.eiffel.com/index.php?title=File:Eb2_container.png&diff=14376 2012-04-05T13:39:01Z <p>Kenga: uploaded a
new version of "Image:Eb2 container.png": Container Hierarchy of EiffelBase2</p> <hr /> <div>Class diagram of the container classes in EiffelBase2 library.</div> Kenga https://
dev.eiffel.com/index.php?title=EiffelBase2&diff=14375 2012-04-05T13:37:30Z <p>Kenga: </p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data
structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==
<br /> <br /> The latest version of the EiffelBase2 source code is available from the repository: [https://bitbucket.org/nadiapolikarpova/eiffelbase2/ https://bitbucket.org/nadiapolikarpova/
eiffelbase2/]<br /> <br /> The source code is divided into two clusters:<br /> <br /> * <e>structures</e> - Data Structures, the core of EiffelBase2. The rest of current document is about
this cluster.<br /> <br /> * <e>mml</e> - Mathematical Model Library: a library of immutable classes used in ''model-based contracts'' (see below).<br /> <br /> ==Goals==<br /> <br /> The
design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability
goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy,
in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top
level EiffelBase2 differentiates between ''containers'' and ''streams''. A container is a finite storage of values, while a stream provides linear access to a set of values. A stream is not
necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br />
Below you can find the class diagram of EiffelBase2, split into two hierarchies: containers and streams/iterators. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but
the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|800px|thumb|none|
Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container:
V_CONTAINER [INTEGER])<br /> do<br /> across<br /> container as i<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> The same thing using the explicit
syntax:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_cursor<br /> until<br /> i.after
<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list:
V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element
at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br />
</e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br />
table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] +
table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE,
for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent
{STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print
(table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>
V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br
/> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br />
<e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.new_cursor.pipe (create
{V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.new_cursor.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br
/> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.new_cursor)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> =
=Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a
mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets,
relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model
classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for
sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary
reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting
of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in
the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> --
Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a
mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the
case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>
status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the
postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e&
gt;Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define their effects on the model queries
of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> The model-based
contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br />
Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of
features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br
/> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature --
Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element
change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front
(index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count +
1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e&
gt;'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking
invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the
inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br />
<e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br />
bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e&
gt;<br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query &
lt;e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> It has been used in the following projects:<br
/> * Traffic 4: modeling public transportation in a city [http://traffic.origo.ethz.ch/ http://traffic.origo.ethz.ch/]</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14101
2011-05-02T13:16:48Z <p>Kenga: /* ToDo list */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is
intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> You can download a
stable release of EiffelBase2 from the downloads page: [http://eiffelbase2.origo.ethz.ch/download http://eiffelbase2.origo.ethz.ch/download]<br /> <br /> The latest version of the EiffelBase2 source
code is available in the repository: [https://svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2
are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and
documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of
descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2
differentiates between ''containers'' and interfaces to access elements of containers, called ''accessors''. A container is a finite storage of values. Accessors are either ''maps'' (accessing
elements by a unique key) or ''streams'' (linear access). An observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random
numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a hierarchy of
containers and maps. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|
800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br
/> ==Usage examples==<br /> <br /> ===Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i:
V_INPUT_ITERATOR [INTEGER]<br /> do<br /> from<br /> i := container.new_iterator<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br
/> Here is some more advanced stuff you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0
at or before position 5:<br /> list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN
do Result := x > 0 end)<br /> -- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a
simple hash table (keys must inherit from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br />
table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br />
If you need a custom hash function or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE
[STRING, INTEGER]<br /> do<br /> -- Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result :=
s.as_lower.hash_code end<br /> )<br /> table ["cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br
/> end<br /> </e><br /> <br /> Similar style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>
V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e> and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special
case of streams. Sometimes you can avoid writing a loop by piping an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br
/> do<br /> create array.make (1, 10)<br /> -- Fill array with random integers:<br /> array.at_first.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br />
array.at_first.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5 6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create
{V_STANDARD_OUTPUT}).pipe (array.at_first)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 "<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is
specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state
space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special
mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list
with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under
the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br />
index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status:
specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>
MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>);
regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular features in
their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the
arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define their effects on the model queries of <e>Current</e> and the
arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not constrain
the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based
contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts
into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current
position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br
/> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> --
Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail
(index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br />
<br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its
own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old
model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense
in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br
/> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain =
sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant,
provided as part of the class invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br />
==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> ===ToDo list===<br /> * Add support for the <e>across</e> syntax.<br
/> * Add more useful streams and iterators (for filtering, mapping, folding, extending lists).<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br />
* Add files and directories.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with
fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14100 2011-05-02T13:14:21Z <p>Kenga: /* Sets and tables */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> You can download a stable release of EiffelBase2 from the downloads page: [http:/
/eiffelbase2.origo.ethz.ch/download http://eiffelbase2.origo.ethz.ch/download]<br /> <br /> The latest version of the EiffelBase2 source code is available in the repository: [https://
svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library
is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with
classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania"
(classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and interfaces to
access elements of containers, called ''accessors''. A container is a finite storage of values. Accessors are either ''maps'' (accessing elements by a unique key) or ''streams'' (linear access). An
observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a hierarchy of containers and maps. All EiffelBase2 class names start
with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br
/> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Iterators===<br
/> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_INPUT_ITERATOR [INTEGER]<br /> do<br /> from<br /> i
:= container.new_iterator<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with
lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at
(5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And
insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple hash table (keys must inherit from
HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table
["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or
equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping
an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with
random integers:<br /> array.at_first.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.at_first.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5
6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.at_first)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 &
quot;<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.
<br /> <br /> ===ToDo list===<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite loops using the &
lt;e>across</e> where appropriate.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g.,
tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14099 2011-05-02T13:13:58Z <p>Kenga: /* Download */</p> <hr />
<div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> You can download a stable release of EiffelBase2 from the downloads page: [http:/
/eiffelbase2.origo.ethz.ch/download http://eiffelbase2.origo.ethz.ch/download]<br /> <br /> The latest version of the EiffelBase2 source code is available in the repository: [https://
svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library
is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with
classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania"
(classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers'' and interfaces to
access elements of containers, called ''accessors''. A container is a finite storage of values. Accessors are either ''maps'' (accessing elements by a unique key) or ''streams'' (linear access). An
observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers are called
''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a hierarchy of containers and maps. All EiffelBase2 class names start
with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br
/> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===Iterators===<br
/> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_INPUT_ITERATOR [INTEGER]<br /> do<br /> from<br /> i
:= container.new_iterator<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff you can do with
lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br /> list.at
(5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br /> -- And
insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple has table (keys must inherit from
HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br /> table
["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function or
equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping
an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with
random integers:<br /> array.at_first.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.at_first.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5
6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.at_first)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 &
quot;<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.
<br /> <br /> ===ToDo list===<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite loops using the &
lt;e>across</e> where appropriate.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g.,
tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14079 2011-03-21T18:59:39Z <p>Kenga: </p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available in the repository:
[https://svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The
library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts
associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "
taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers''
and interfaces to access elements of containers, called ''accessors''. A container is a finite storage of values. Accessors are either ''maps'' (accessing elements by a unique key) or ''streams''
(linear access). An observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers
are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a hierarchy of containers and maps. All EiffelBase2 class
names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]
<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Usage examples==<br /> <br /> ===
Iterators===<br /> Here is how you can iterate through any container:<br /> <e><br /> do_something (container: V_CONTAINER [INTEGER])<br /> local<br /> i: V_INPUT_ITERATOR [INTEGER]<br /> do<br
/> from<br /> i := container.new_iterator<br /> until<br /> i.after<br /> loop<br /> print (i.item)<br /> i.forth<br /> end<br /> end<br /> </e><br /> <br /> Here is some more advanced stuff
you can do with lists:<br /> <e><br /> do_something (list: V_LIST [INTEGER])<br /> local<br /> i: V_LIST_ITERATOR [INTEGER]<br /> do<br /> -- Find the last 0 at or before position 5:<br />
list.at (5).search_back (0)<br /> -- Find the first positive element at or after position 5:<br /> i := list.at (5)<br /> i.satisfy_forth (agent (x: INTEGER): BOOLEAN do Result := x > 0 end)<br />
-- And insert a 0 after it:<br /> i.extend_right (0)<br /> end<br /> </e><br /> <br /> ===Sets and tables===<br /> <br /> Here is how you create and use a simple has table (keys must inherit
from HASHABLE):<br /> <e><br /> do_something<br /> local<br /> table: V_HASH_TABLE [STRING, INTEGER]<br /> do<br /> create table.with_object_equality<br /> table ["cat"] := 1<br />
table ["dog"] := 2<br /> print (table ["cat"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> If you need a custom hash function
or equivalence relation on keys, you can use V_GENERAL_HASH_TABLE, for example:<br /> <e><br /> do_something<br /> local<br /> table: V_GENERAL_HASH_TABLE [STRING, INTEGER]<br /> do<br /> --
Create case-insensitive table:<br /> create table.make (<br /> agent {STRING}.is_case_insensitive_equal,<br /> agent (s: STRING): INTEGER do Result := s.as_lower.hash_code end<br /> )<br /> table [&
quot;cat"] := 1<br /> table ["dog"] := 2<br /> print (table ["CAT"] + table ["dog"])<br /> -- Prints "3"<br /> end<br /> </e><br /> <br /> Similar
style applies to <e>V_HASH_SET</e> and <e>V_GENERAL_HASH_SET</e>, <e>V_SORTED_TABLE</e> and <e>V_GENERAL_SORTED_TABLE</e>, <e>V_SORTED_SET</e&
gt; and <e>V_GENERAL_SORTED_SET</e>.<br /> <br /> ===Stream piping===<br /> <br /> Iterators in EiffelBase2 are a special case of streams. Sometimes you can avoid writing a loop by piping
an input stream into an output stream, for example:<br /> <e><br /> do_something<br /> local<br /> array: V_ARRAY [INTEGER]<br /> do<br /> create array.make (1, 10)<br /> -- Fill array with
random integers:<br /> array.at_first.pipe (create {V_RANDOM})<br /> -- Fill array with values parsed from a string:<br /> array.at_first.pipe (create {V_STRING_INPUT [INTEGER]}.make ("1 2 3 4 5
6 7 8 9 10", agent {STRING}.to_integer))<br /> -- Print array elements into standard output:<br /> (create {V_STANDARD_OUTPUT}).pipe (array.at_first)<br /> -- Prints "1 2 3 4 5 6 7 8 9 10 &
quot;<br /> end<br /> </e><br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.
<br /> <br /> ===ToDo list===<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite loops using the &
lt;e>across</e> where appropriate.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g.,
tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=File:Eb2_container.png&diff=14078 2011-03-21T17:48:26Z <p>Kenga: uploaded a new
version of "Image:Eb2 container.png"</p> <hr /> <div>Class diagram of the container classes in EiffelBase2 library.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14077
2011-03-21T13:26:12Z <p>Kenga: /* Design */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is
intended as the future replacement for the [[EiffelBase]] library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of
the EiffelBase2 source code is available in the repository: [https://svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design
goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal,
but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in
particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top
level EiffelBase2 differentiates between ''containers'' and interfaces to access elements of containers, called ''accessors''. A container is a finite storage of values. Accessors are either ''maps''
(accessing elements by a unique key) or ''streams'' (linear access). An observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of
pseudo-random numbers. Streams that traverse containers are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a
hierarchy of containers and maps. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br />
[[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator
class hierarchy]]<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each
class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects:
booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are
represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2
contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</
e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or
a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You
define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br />
note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br />
sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e
> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for
specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this
example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing
model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a
function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define
their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not
modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can be
conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of
the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br />
class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence
[index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br
/> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br />
sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.
<br /> <br /> ===ToDo list===<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite loops using the &
lt;e>across</e> where appropriate.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g.,
tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14075 2011-03-11T12:44:21Z <p>Kenga: </p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2 is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]]
library, which has for many years played a central role in Eiffel development.<br /> <br /> ==Download==<br /> <br /> The latest version of the EiffelBase2 source code is available in the repository:
[https://svn.origo.ethz.ch/eiffelbase2/trunk/ https://svn.origo.ethz.ch/eiffelbase2/trunk/]<br /> <br /> ==Goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The
library is designed to allow proofs of correctness.<br /> <br /> *Complete specifications. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts
associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "
taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> ==Design==<br /> On the top level EiffelBase2 differentiates between ''containers''
and interfaces to access elements of containers, called ''observers''. A container is a finite storage of values. Observers are either ''maps'' (accessing elements by a unique key) or ''streams''
(linear access). An observer is not necessarily bound to a container, e.g. <e>RANDOM</e> stream observes an infinite sequence of pseudo-random numbers. Streams that traverse containers
are called ''iterators''.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies. The first one is a hierarchy of containers and maps. All EiffelBase2 class
names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]
<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ==Model-based contracts==<br /> ===Models
===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents
explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also
introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are
straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags.
Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model
components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a
model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its
model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br />
feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements
<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of
type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence&
lt;/e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e>
note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular
features in their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the
models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define their effects on the model queries of <e>Current</e>
and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not
constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the
model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based
contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item
at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off:
BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v:
G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) +
sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> <
/e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to
represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition
of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts
make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br />
model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition:
bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the
linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</
e>.<br /> <br /> ==Status and roadmap==<br /> <br /> EiffelBase2 is currently being developed as a project at ETH Zurich.<br /> <br /> ===ToDo list===<br /> * Add immutable and mutable strings.<br
/> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Add an iterator over
keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div>
Kenga https://dev.eiffel.com/index.php?title=File:Eb2_iterator.png&diff=14074 2011-03-10T14:19:30Z <p>Kenga: uploaded a new version of "Image:Eb2 iterator.png"</p> <hr /> <div>Class diagram
of the iterator classes in EiffelBase2 library.</div> Kenga https://dev.eiffel.com/index.php?title=File:Eb2_container.png&diff=14073 2011-03-10T14:18:24Z <p>Kenga: uploaded a new version of "
Image:Eb2 container.png"</p> <hr /> <div>Class diagram of the container classes in EiffelBase2 library.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14072
2011-03-10T14:16:42Z <p>Kenga: </p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for
Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel
development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full
contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its
entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary
inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on
external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with
the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the
library is using external iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to
separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish
''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or
''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case
results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but
this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br />
<br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead,
are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps
and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a
complete and clean classification here is impossible (there are too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit
directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a
common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies
(according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of
the developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is related to the corresponding hash function, order, etc. and cannot be modified
arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Add immutable and mutable strings.<br /> * Make model class implementation more efficient.<br /> * Add classes and directories.<br /> * Rewrite
loops using the <e>across</e> where appropriate.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data
structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14012 2010-11-21T13:43:44Z <p>Kenga: /*
Comparison criteria */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br />
<br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as
a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external
iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with
their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining
property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first
case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access
mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the
design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a
''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as
functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter
in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean
classification here is impossible (there are too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <
e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor
for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to
connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the
developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is related to the corresponding hash function, order, etc. and cannot be modified
arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Implement &
lt;e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more
efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14011 2010-11-21T13:43:09Z <p>
Kenga: /* Comparison criteria */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for
Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel
development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full
contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its
entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary
inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on
external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with
the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the
library is using external iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to
separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish
''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or
''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case
results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but
this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br />
<br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead,
are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps
and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a
complete and clean classification here is impossible (there are too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit
directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a
common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies
(according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of
the developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is related the corresponding hash function, order, etc. and cannot be modified
arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Implement &
lt;e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more
efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=14010 2010-11-21T13:31:36Z <p>
Kenga: /* ToDo list */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br />
<br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as
a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external
iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with
their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining
property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first
case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access
mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the
design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a
''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as
functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter
in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean
classification here is impossible (there are too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <
e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor
for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to
connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the
developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is defined by the corresponding hash function, order, etc. and cannot be modified
arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Implement &
lt;e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more
efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13990 2010-09-16T15:18:28Z <p>
Kenga: /* ToDo list */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br />
<br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as
a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external
iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with
their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining
property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first
case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access
mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the
design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a
''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as
functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter
in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean
classification here is impossible (there are too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <
e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor
for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to
connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the
developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is defined by the corresponding hash function, order, etc. and cannot be modified
arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they
can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state
space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index
<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result =
sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain
[index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br />
ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from &
lt;e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binary trees in pre- and postorder.<br />
* Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT&
lt;/e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup
both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13989 2010-09-16T15:17:48Z <p>Kenga: /* ToDo list */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement
for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The
design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but
also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in
particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase,
application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when
useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br />
===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external iterators to traverse containers as
opposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with their measurements and modification
means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by
unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access
mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access mechanism and allows only one instance
of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e&
gt;RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of
values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are
external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called
''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean classification here is impossible (there are
too many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one
exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their
common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of
containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the developers. All EiffelBase2 class names start with <e&
gt;V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The
second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design
decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used in search operations and to define the
uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this
purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br
/> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make
<e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy
(p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve the
problem with sets.<br /> <br /> An additional question (if the comparison criterion is stored in the container) is whether it can be changed during the object's lifetime. A well known practice is
that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table
implementations (hashed, sorted) the comparison criterion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrarily even when the set or table is empty.<br /> <br
/> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the
list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover,
the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter
the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the
same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value.
However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely,
but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==
<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class defines a ''model'' - a mathematical object that
represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or
bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and
thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and
bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model
components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a
model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its
model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br />
feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements
<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of
type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence&
lt;/e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e>
note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular
features in their terms. For queries we define their result (or the model of the result, in case the query returns a fresh object) as a function of the model of <e>Current</e> and (the
models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For commands we define their effects on the model queries of <e>Current</e>
and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not
constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the
model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based
contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item
at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off:
BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v:
G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) +
sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> <
/e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to
represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition
of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts
make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br />
model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition:
bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the
linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query <e>bag</e> in terms of a new model query <e>sequence</
e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and
other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> * Implement <e>
PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the
<e>across</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.
<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/
index.php?title=EiffelBase2&diff=13936 2010-07-29T12:24:29Z <p>Kenga: /* ToDo list */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under
development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which
has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed
to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features
should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br
/> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic
EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that
significantly influenced the architecture of the library is using external iterators to traverse containers as opposed to internal ones available in classic EiffelBase.<br /> In the design inspired
by external iterators we found it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br
/> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either
''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can
exist at the same time; the second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly
inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they
cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood
real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or
remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of
modification, but previous experience shows that a complete and clean classification here is impossible (there are too many kinds of modification, basically one per "concrete" data
structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values
accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the
class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be
included in the library, but are not the first priority of the developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current
document for brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png
|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison
criteria for container elements.<br /> Adjustable comparison criterion is used in search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic
EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an
object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation
(no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e
>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases
when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> An additional question (if the comparison criterion is
stored in the container) is whether it can be changed during the object's lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e
>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison criterion is defined by the corresponding hash
function, order, etc. and cannot be modified arbitrarily even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a
set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever
equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot
be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to
specific search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists
(and iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was
decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of
arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts''
specification method. This method prescribes that each class defines a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as
tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br />
<br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model
Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e&
gt;BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a
mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the
model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e>
clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br />
...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we
declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries
are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model
queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===
Contracts===<br /> The purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in
case the query returns a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the
class invariant. For commands we define their effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is
equivalent to stating that it's not modified. <br /> <br /> The model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through
model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in the model-based contracts approach constrain the values of model queries to make them reflect precisely
the abstract state space of the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br />
model: sequence, index<br /> class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br />
definition: Result = sequence [index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition:
not sequence.domain [index]<br /> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br
/> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br
/> invariant<br /> index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e
>A</e> it is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does
not reuse an <e>A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the
parent's model in terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits
directly from <e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence,
index<br /> class LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result
:= bag [x] = sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old
model query <e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at
ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can
see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e>
to traverse binaries in pre- and postorder.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>across</e> where appropriate.<br /> * Implement <e&
gt;RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient
data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13803 2010-04-24T19:14:32Z <p>Kenga: /*
Model-based contracts */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel.
It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br />
<br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as
a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external
iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with
their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining
property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first
case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access
mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the
design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a
''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as
functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter
in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean
classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <
e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor
for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to
connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the
developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for brevity. <br /> [[Image:eb2_container.png|800px|
thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash function, order, etc. and cannot be modified
arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> ===Models===<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method
prescribes that each class define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined
mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects
in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part
of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e&
gt;INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model
for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model
query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> &
lt;e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class
<e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily
introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e&
gt;index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ===Contracts===<br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. Definitions of zero-argument queries, as usual, can be moved to the class invariant. For
commands we define its effects on the model queries of <e>Current</e> and the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating
that it's not modified. <br /> <br /> Model-based contracts approach does not constrain the way in which you write preconditions: it is not necessary to express them through model queries if they can
be conveniently expressed otherwise.<br /> <br /> Class invariants in model-based contracts approach constrain the values of model queries to make them reflect precisely the abstract state space of
the class.<br /> <br /> Let us add a couple of features and model-based contracts into the class <e>LIST</e> shown above:<br /> <e><br /> note<br /> model: sequence, index<br />
class LIST [G]<br /> <br /> feature -- Access<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence
[index]<br /> end<br /> ...<br /> <br /> feature -- Status report<br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br
/> end<br /> ...<br /> <br /> feature -- Element change<br /> put_right (v: G)<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br />
sequence_effect: sequence |=| old (sequence.front (index).extended (v) + sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> ...<br /> <br /> invariant<br />
index_in_range: 0 <= index and index <= sequence.count + 1<br /> end<br /> </e><br /> <br /> ===Inheritance===<br /> If a class <e>B</e> inherits from <e>A</e> it
is free to choose, whether to reuse each of <e>A</e>'s model queries to represent its own model, as well as introduce new model queries. If <e>B</e> does not reuse an <e&
gt;A</e>'s model query is has to provide a ''linking invariant'': a definition of the old model query in terms of <e>B</e>'s model. Linking invariants explain the parent's model in
terms of the heir's model and thus make sure that the inherited model-based contracts make sense in the heir.<br /> <br /> For example, suppose that <e>LIST</e> inherits directly fro <
e>CONTAINER</e>, whose model is a bag:<br /> <e><br /> note<br /> model: bag<br /> class CONTAINER [G]<br /> ...<br /> end<br /> <br /> note<br /> model: sequence, index<br /> class
LIST [G]<br /> ...<br /> invariant<br /> ...<br /> bag_domain_definition: bag.domain = sequence.range<br /> bag_definition: bag.domain.for all (agent (x: G): BOOLEAN<br /> do Result := bag [x] =
sequence.occurrences (x) end)<br /> end<br /> </e><br /> Here the linking invariant, provided as part of the class invariant in <e>LIST</e> completely defines an old model query &
lt;e>bag</e> in terms of a new model query <e>sequence</e>.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see
the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of
most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <e>
PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a container is modified
through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>accross</e>
where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library
void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&
diff=13802 2010-04-24T18:51:40Z <p>Kenga: /* Model-based contracts */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a
general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years
played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of
correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the
relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful
abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal
mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it
does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the
architecture of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found
it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side
we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a
supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the
second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly
used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active
iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite
structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we
have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous
experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data
structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a
continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of
EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the
library, but are not the first priority of the developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for
brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|
none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for
container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses
the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a
specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to
create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e&
gt;, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the
predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the
container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>
changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash
function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set
or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence
relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed
afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific
search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and
iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was
decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of
arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method.
This method prescribes that each class define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more
predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such
mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library
(MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>
BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a
mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the
model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e>
clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> -- Cursor position<br /> deferred<br /> end<br />
...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> -- Sequence of list elements<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we
declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries
are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model
queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> The
purpose of introducing model queries is to define the postconditions of the regular features in their terms. For queries we define their result (or the model of the result, in case the query returns
a fresh object) as a function of the model of <e>Current</e> and (the models of) the arguments. For commands we define its effects on the model queries of <e>Current</e> and
the arguments. If a model query is not mentioned in the postcondition of a command, it is equivalent to stating that it's not modified. <br /> <br /> Model-based contracts approach does not constrain
the way in which you write preconditions: it is not necessary to express them through model queries if they can be conveniently expressed otherwise.<br /> <br /> Class invariants in model-based
contracts approach constrain the values of model queries to make them reflect precisely the abstract state space.<br /> <br /> Let us add a couple of features and model-based contracts into the class
<e>LIST</e> shown above:<br /> <e><br /> off: BOOLEAN<br /> -- Is cursor off all elements?<br /> deferred<br /> ensure<br /> definition: not sequence.domain [index]<br /> end<br />
<br /> item: G<br /> -- Item at current position<br /> require<br /> not_off: not off<br /> deferred<br /> ensure<br /> definition: Result = sequence [index]<br /> end<br /> <br /> put_right (v: G)
<br /> -- Put `v' to the right of the cursor<br /> require<br /> not_after: not after<br /> deferred<br /> ensure<br /> sequence_effect: sequence |=| old (sequence.front (index).extended (v) +
sequence.tail (index + 1))<br /> index_effect: index = old index<br /> end<br /> </e><br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH
Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a
list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <
e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a container is
modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>accross<
/e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the
library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=
EiffelBase2&diff=13801 2010-04-24T18:29:16Z <p>Kenga: </p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data
structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central
role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br
/> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant
semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction,
unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms,
based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not
conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture
of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural
to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we
distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a
supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the
second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly
used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active
iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite
structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we
have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous
experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data
structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a
continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of
EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the
library, but are not the first priority of the developers. All EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the current document for
brevity. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|
none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for
container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses
the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a
specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to
create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e&
gt;, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the
predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the
container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>
changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash
function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set
or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence
relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed
afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific
search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and
iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was
decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of
arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method.
This method prescribes that each class define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more
predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such
mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library
(MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>
BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a
mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the
model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e>
clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting
of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification
purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We
attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has
started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list=
==<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</
e>.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the
situation when a container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops
using the <e>accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE
</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://
dev.eiffel.com/index.php?title=EiffelBase2&diff=13800 2010-04-24T18:26:22Z <p>Kenga: </p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under
development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which
has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed
to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features
should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br
/> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic
EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that
significantly influenced the architecture of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired
by external iterators we found it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br
/> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either
''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can
exist at the same time; the second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly
inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they
cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood
real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or
remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of
modification, but previous experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data
structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e><ref>EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the
prefix is omitted in the document.</ref>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves
as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies
(according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of
the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px
|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria
for container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase
uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a
specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to
create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e&
gt;, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the
predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the
container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>
changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash
function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set
or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence
relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed
afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific
search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and
iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was
decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of
arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method.
This method prescribes that each class define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more
predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such
mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library
(MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>
BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a
mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the
model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e>
clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> deferred<br /> end<br /> ...<br /> feature --
Specification<br /> sequence: MML_SEQUENCE [G]<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting
of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification
purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We
attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has
started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list=
==<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</
e>.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the
situation when a container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops
using the <e>accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE
</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.<br /> <br /> <references/
></div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13799 2010-04-24T18:25:53Z <p>Kenga: /* Traversal mechanisms and classification criteria */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement
for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The
design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but
also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in
particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase,
application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when
useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br />
===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external iterators to traverse containers as
apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with their measurements and modification
means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by
unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access
mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access mechanism and allows only one instance
of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e&
gt;RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of
values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are
external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called
''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean classification here is impossible (there are
two many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e><ref>
EiffelBase2 class names start with <e>V_</e> (for ''Verified''), but the prefix is omitted in the document.</ref>. There is one exception: <e>SEQUENCE</e>, which
represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br
/> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals
stand for classes that might be included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br />
The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important
design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used is search operations and to
define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for
this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.
<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to
make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>
satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve
the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the container) is whether it can be changed during the object lifetime. A well known practice is that
in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table
implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br
/> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the
list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover,
the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter
the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the
same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value.
However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely,
but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==
<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class define a ''model'' - a mathematical object that represents explicitly
its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a
special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward
translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and
integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally, arbitrary reference classes can be used as model components to
denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair consisting of a sequence and an integer is a model for a list
with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of the class in the source code by listing its model queries under
the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model: sequence, index<br /> class LIST [G]<br /> feature -- Access<br />
index: INTEGER<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> note<br /> status: specification<br /> ...<br /> end<br /> </e><br />
Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>MML_SEQUENCE</e> and an integer index. As you can see, model
queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>); regular queries meant to be called from the code can be also reused
as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a query to indicate that its primary purpose is specification.<br /> <br
/> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material
will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br
/> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in
pre- and postorder.<br /> * Iterator management: currently the situation when a container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> *
Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT<
/e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup
both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13798 2010-04-24T18:18:54Z <p>Kenga: /* Model-based contracts */</p> <hr /> <div>
[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement
for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The
design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but
also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in
particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase,
application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when
useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br />
===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external iterators to traverse containers as
apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with their measurements and modification
means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by
unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access
mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access mechanism and allows only one instance
of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e&
gt;RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of
values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are
external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called
''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean classification here is impossible (there are
two many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one
exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their
common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of
containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|
none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===
Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison
criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e>
attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to
implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not
serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p:
PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but
duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the container) is whether it can be
changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e>
query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash function, order, etc. and cannot be modified
arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for
search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>
is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search
operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===
Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas
for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays
from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data
structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> EiffelBase2 is specified using the ''model-based contracts'' specification method. This method prescribes that each class
define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a class is expressed as tuple of one or more predefined mathematical objects: booleans,
integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs (references).<br /> <br /> Such mathematical objects in the program are represented by
''model classes'', which are immutable and thus are straightforward translations of mathematical definitions. The Mathematical Model Library (MML), which is a part of EiffelBase2 contains model
classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <e>BOOLEAN</e> and <e>INTEGER</e>. Finally,
arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> For example, a mathematical sequence is a model for a stack or a queue. A pair
consisting of a sequence and an integer is a model for a list with an internal cursor.<br /> <br /> The value of each component of the model is defined by a ''model query''. You define the model of
the class in the source code by listing its model queries under the tag <e>model</e> in the class's <e>note</e> clause. For example:<br /> <e><br /> note<br /> model:
sequence, index<br /> class LIST [G]<br /> feature -- Access<br /> index: INTEGER<br /> deferred<br /> end<br /> ...<br /> feature -- Specification<br /> sequence: MML_SEQUENCE [G]<br /> note<br />
status: specification<br /> ...<br /> end<br /> </e><br /> Here we declared the model of class <e>LIST</e> consisting of two components: a mathematical sequence of type <e>
MML_SEQUENCE</e> and an integer index. As you can see, model queries are not necessarily introduced specifically for specification purposes (as is the case with <e>sequence</e>);
regular queries meant to be called from the code can be also reused as model queries (as <e>index</e> in this example). We attach a <e>status: specification</e> note to a
query to indicate that its primary purpose is specification.<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://
eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of most important
changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <e>PREORDER_ITERATOR&
lt;/e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a container is modified through one iterator
while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>accross</e> where appropriate.<br
/> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> *
Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13797
2010-04-24T17:58:38Z <p>Kenga: /* Model-based contracts */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose
data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a
central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of
correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the
relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful
abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal
mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it
does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the
architecture of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found
it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side
we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a
supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the
second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly
used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active
iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite
structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we
have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous
experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data
structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a
continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of
EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the
library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and
iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is
how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and tables
(uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another
approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents
to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>
search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>.
This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional
question (if the comparison criterion is stored in the container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set
is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison
critearion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat
differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in
EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument
to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't
pertain to the container itself, but to specific search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the
classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of
Software Engineering (17.03.2010) is was decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be
changed in future so that indices of arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==Model-based contracts==<br /> EiffelBase2 is specified using the
''model-based contracts'' specification method. This method prescribes that each class define a ''model'' - a mathematical object that represents explicitly its abstract state space. The model of a
class is expressed as tuple of one or more predefined mathematical objects: booleans, integers, sets, relations, maps, sequences or bags. We also introduce a special mathematical sort for object IDs
(references).<br /> <br /> Such mathematical objects in the program are represented by ''model classes'', which are immutable and thus are straightforward translations of mathematical definitions.
The MML library, which is a part of EiffelBase2 contains model classes for sets, relations, maps, sequences and bags. Boolean and integer components of models are represented by standard classes <
c>BOOLEAN</c> and <c>INTEGER</c>. Finally, arbitrary reference classes can be used as model components to denote the mathematical sort of references.<br /> <br /> ==Status and
roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be
transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> *
Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre-
and postorder.<br /> * Iterator management: currently the situation when a container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the
implementation of the MML library.<br /> * Rewrite loops using the <e>accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e>
streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways,
heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13795 2010-04-23T16:17:49Z <p>Kenga: /* Indexing */</p> <hr /> <div>[[Category:Library]]<br /> <br
/> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library
("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2
are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and
documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of
descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic
classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br />
*Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal
mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external iterators to traverse containers as apposed to
internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with their measurements and modification means) from
interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by unique keys&
quot;) and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access mechanism and
is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access mechanism and allows only one instance of an observer
per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e&
gt; it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers
are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are external to a program.
Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br
/> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean classification here is impossible (there are two many kinds of
modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>
SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and
replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps.
Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class
hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br
/> Another important design decision for a data structures library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used is search
operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>
CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary
equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> #
Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</
e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features
and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the container) is whether it can be changed during the object lifetime. A well
known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most
set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrary even when the set or table is
empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is
taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library
classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because
it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific search operations.<br /> <br /> ===Indexing===<br /> The current version of
EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and iterators) always starts from 1, whereas for arrays the starting index can be set
to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was decided that the possibility to start arrays from an arbitrary index is not crucial and
is used very rarely, but complicates the API. Thus it will probably be changed in future so that indices of arrays, like those of other indexed data structures, always start at 1.<br /> <br /> ==
Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page].
Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br /> Below you can see a list of most important changes planned for EiffelBase2.<br /> *
Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br /> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR
</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a container is modified through one iterator while it's being traversed with another
is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e>accross</e> where appropriate.<br /> * Implement <e>RANDOM</e>
and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br /> * Make the library void-safe.<br /> * Implement more efficient data structures:
e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13794 2010-04-23T16:16:41Z <p>Kenga: /* Design overview */
</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the
future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals
==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the
verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and
consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in
Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors
also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==
Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that significantly influenced the architecture of the library is using external iterators to
traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate ''containers'' (with their
measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps'' (with the defining property
"accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or ''inherited'' by it. The first case
produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case results in an internal access mechanism
and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but this is not enforced by the design. For
infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the other side, a
''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually represented as
functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and iterators (the latter
in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete and clean
classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly from <
e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor
for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to
connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the
developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|
thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures library is how to represent and modify comparison criteria
for container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and tables (uniqueness of keys).<br /> # Classic EiffelBase
uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses another approach: storing equivalence as an object of a
specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use agents to store the equalivalence relation (no need to
create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e>search (x: G)</e> always use <e>=</e&
gt;, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e>. This is more flexible (better fits cases when the
predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional question (if the comparison criterion is stored in the
container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set is not empty. This gives rise to <e>
changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison critearion is defined by the corresponding hash
function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat differently comparison criteria on which a set
or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in EiffelBase, as we can use whatever equivalence
relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument to the creation procedure and cannot be changed
afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't pertain to the container itself, but to specific
search operations.<br /> <br /> ===Indexing===<br /> The current version of EiffelBase2 is using the same policy for indexing as the one used in the classic EiffelBase: indexing of lists (and
iterators) always starts from 1, whereas for arrays the starting index can be set to an arbitrary value. However, during the code review in the Chair of Software Engineering (17.03.2010) is was
decided that the possibility to start arrays from an arbitrary index is not crucial and is used very rarely, but complicates the API. Thus it will probably be changed in future so that array
indexing, like in other indexed data structures, always starts at 1.<br /> <br /> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a
project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br />
Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br
/> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a
container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e&
gt;accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br
/> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/
index.php?title=EiffelBase2&diff=13793 2010-04-23T16:03:37Z <p>Kenga: /* Status and roadmap */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under
development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which
has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed
to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features
should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br
/> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic
EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that
significantly influenced the architecture of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired
by external iterators we found it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br
/> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either
''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can
exist at the same time; the second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly
inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they
cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood
real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or
remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of
modification, but previous experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data
structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values
accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the
class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be
included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of
streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures
library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and
tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses
another approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use
agents to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e&
gt;search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e&
gt;. This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional
question (if the comparison criterion is stored in the container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set
is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison
critearion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat
differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in
EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument
to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't
pertain to the container itself, but to specific search operations.<br /> <br /> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a
project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.<br /> <br /> ===ToDo list===<br />
Below you can see a list of most important changes planned for EiffelBase2.<br /> * Test extensively with AutoTest.<br /> * Implement <e>HASH_SET</e> and <e>HASH_TABLE</e>.<br
/> * Implement <e>PREORDER_ITERATOR</e> and <e>POSTORDER_ITERATOR</e> to traverse binaries in pre- and postorder.<br /> * Iterator management: currently the situation when a
container is modified through one iterator while it's being traversed with another is not handled anyhow.<br /> * Finish the implementation of the MML library.<br /> * Rewrite loops using the <e&
gt;accross</e> where appropriate.<br /> * Implement <e>RANDOM</e> and <e>STANDARD_INPUT</e> streams.<br /> * Add an iterator over keys for <e>TABLE</e>.<br
/> * Make the library void-safe.<br /> * Implement more efficient data structures: e.g., tables with fast lookup both ways, heaps, skip lists, treaps, etc.</div> Kenga https://dev.eiffel.com/
index.php?title=EiffelBase2&diff=13792 2010-04-23T14:48:09Z <p>Kenga: /* Design overview */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under
development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which
has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed
to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features
should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br
/> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic
EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> ===Traversal mechanisms and classification criteria===<br /> A design decision that
significantly influenced the architecture of the library is using external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired
by external iterators we found it natural to separate ''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br
/> <br /> On the observer side we distinguish ''maps'' (with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either
''used'' by a container (as a supplier) or ''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can
exist at the same time; the second case results in an internal access mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly
inherited and iterators are mostly used, but this is not enforced by the design. For infinite sequences like <e>RANDOM</e> it makes sense to ''inherit'' from an iterator, because they
cannot have more than one active iterator.<br /> <br /> On the other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood
real, physical storage. Infinite structures, instead, are usually represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or
remove) and for this purpose we have observers: maps and iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of
modification, but previous experience shows that a complete and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data
structure). So concrete data structures mostly just inherit directly from <e>CONTAINER</e>. There is one exception: <e>SEQUENCE</e>, which represents a sequence of values
accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors out their common search and replacement mechanisms.<br /> <br /> Below you can find the
class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be
included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of
streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br /> ===Comparison criteria===<br /> Another important design decision for a data structures
library is how to represent and modify comparison criteria for container elements.<br /> Adjustable comparison criterion is used is search operations and to define the uniqueness property in sets and
tables (uniqueness of keys).<br /> # Classic EiffelBase uses the boolean <e>object_comparison</e> attribute in <e>CONTAINER</e> for this purpose.<br /> # Gobo.structures uses
another approach: storing equivalence as an object of a specific class, which can be redefined by the user to implement an arbitrary equalivalence relation.<br /> # A similar approach would be to use
agents to store the equalivalence relation (no need to create classes). The downside is that agents do not serialize well.<br /> # Another approach is to make <e>has (x: G)</e> and <e&
gt;search (x: G)</e> always use <e>=</e>, but also introduce <e>exists (p: PREDICATE [ANY, TUPLE [G]])</e> and <e>satisfy (p: PREDICATE [ANY, TUPLE [G]])</e&
gt;. This is more flexible (better fits cases when the predicate is not an equivalence), but duplicates the number of search features and doesn't solve the problem with sets.<br /> <br /> Additional
question (if the comparison criterion is stored in the container) is whether it can be changed during the object lifetime. A well known practice is that in sets it is not allowed to change if the set
is not empty. This gives rise to <e>changeable_comparison_criterion</e> query in CONTAINER.<br /> Note that for most set and table implementations (hashed, sorted) the comparison
critearion is defined by the corresponding hash function, order, etc. and cannot be modified arbitrary even when the set or table is empty.<br /> <br /> The strategy chosen in EiffelBase2 is to treat
differently comparison criteria on which a set or a table is based and ones just used for search. For the first the Gobo approach is taken (2 in the list above), because it is more flexible than in
EiffelBase, as we can use whatever equivalence relation we want instead of just <e>is_equal</e> (useful for library classes). Moreover, the equivalence relation is passed as an argument
to the creation procedure and cannot be changed afterward. <br /> <br /> For search operations the approach 4 is taken, because it doesn't clutter the container state with properties that don't
pertain to the container itself, but to specific search operations.<br /> <br /> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a
project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.</div> Kenga https://dev.eiffel.com/
index.php?title=EiffelBase2&diff=13791 2010-04-23T13:19:52Z <p>Kenga: /* Design overview */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under
development, is a general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which
has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed
to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features
should describe the relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not
representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br
/> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic
EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> A design decision that significantly influenced the architecture of the library is using
external iterators to traverse containers as apposed to internal ones available in classic EiffelBase.<br /> In the design inspired by external iterators we found it natural to separate
''containers'' (with their measurements and modification means) from interfaces to access elements of containers, here called ''observers''.<br /> <br /> On the observer side we distinguish ''maps''
(with the defining property "accessing elements by unique keys") and iterators (providing linear traversal). An observer can be either ''used'' by a container (as a supplier) or
''inherited'' by it. The first case produces an external access mechanism and is useful when multiple instances of an observer for the same container can exist at the same time; the second case
results in an internal acess mechanism and allows only one instance of an observer per container at a time. In the rest of the library maps are mostly inherited and iterators are mostly used, but
this is not enforced by the design. For infinite sequences like RANDOM it makes sense to ''inherit'' from an iterator, because they cannot have more than one active iterator.<br /> <br /> On the
other side, a ''container'' is a finite storage of values. Containers are deliberately confined to finite structures and understood real, physical storage. Infinite structures, instead, are usually
represented as functions or mechanisms that are external to a program. Most of the time we can only access their elements (not add or remove) and for this purpose we have observers: maps and
iterators (the latter in the infinite case are called ''streams'').<br /> <br /> Containers may be classified based on different means of modification, but previous experience shows that a complete
and clean classification here is impossible (there are two many kinds of modification, basically one per "concrete" data structure). So concrete data structures mostly just inherit directly
from CONTAINER. There is one exception: SEQUENCE, which represents a sequence of values accessible by indices from a continuous interval. It serves as a common ancestor for ARRAY and LIST and factors
out their common search and replacement mechanisms.<br /> <br /> Below you can find the class diagram of EiffelBase2, split into two hierarchies (according to connectedness). The first one is a
hierarchy of containers and maps. Note: dash-bordered ovals stand for classes that might be included in the library, but are not the first priority of the developers. <br /> [[Image:eb2_container.png
|800px|thumb|none|Container class hierarchy]]<br /> <br /> The second one is a hierarchy of streams and iterators.<br /> [[Image:eb2_iterator.png|800px|thumb|none|Iterator class hierarchy]]<br /> <br
/> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project
page]. Documentation and other material will soon be transferred to the present page.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13790 2010-04-23T12:51:43Z <p>Kenga: </p> <hr
/> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is intended as the future
replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br /> ==Design goals==<br
/> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a result of the
verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br /> *Simple and
consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br /> <br /> *As in
Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with internal cursors
also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br /> <br /> ==
Design overview==<br /> [[Image:eb2_container.png|800px|frameless|left|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|800px|frameless|left|Iterator class hierarchy]]<br /> <br /> =
=Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page].
Documentation and other material will soon be transferred to the present page.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13789 2010-04-23T12:49:05Z <p>Kenga: /* Design
overview */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is
intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br
/> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a
result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> [[Image:eb2_container.png|800px|thumb|left|Container class hierarchy]]<br /> <br /> [[Image:eb2_iterator.png|800px|thumb|left|Iterator class hierarchy]]<br /> <br
/> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project
page]. Documentation and other material will soon be transferred to the present page.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&diff=13788 2010-04-23T12:48:34Z <p>Kenga: /*
Design overview */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a general-purpose data structures library for Eiffel. It is
intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years played a central role in Eiffel development. <br /> <br
/> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of correctness.<br /> <br /> *Full contracts. Partly as a
result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the relevant semantics in its entirety.<br /> <br />
*Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful abstraction, unnecessary inheritance links).<br />
<br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal mechanisms, based on external cursors (with
internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it does not conflict with the preceding goals.<br
/> <br /> ==Design overview==<br /> [[Image:eb2_container.png|800px|thumb|left|Container class hierarchy]]<br /> [[Image:eb2_iterator.png|800px|thumb|left|Iterator class hierarchy]]<br /> <br /> ==
Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://eiffelbase2.origo.ethz.ch/ project page].
Documentation and other material will soon be transferred to the present page.</div> Kenga https://dev.eiffel.com/index.php?title=File:Eb2_iterator.png&diff=13787 2010-04-23T12:47:03Z <p>Kenga: Class
diagram of the iterator classes in EiffelBase2 library.</p> <hr /> <div>Class diagram of the iterator classes in EiffelBase2 library.</div> Kenga https://dev.eiffel.com/index.php?title=EiffelBase2&
diff=13786 2010-04-23T12:46:32Z <p>Kenga: /* Design overview */</p> <hr /> <div>[[Category:Library]]<br /> <br /> ==Overview==<br /> <br /> EiffelBase2, currently under development, is a
general-purpose data structures library for Eiffel. It is intended as the future replacement for the [[EiffelBase]] library ("Classic EiffelBase" in this document), which has for many years
played a central role in Eiffel development. <br /> <br /> ==Design goals==<br /> <br /> The design goals for EiffelBase2 are:<br /> <br /> *Verifiability. The library is designed to allow proofs of
correctness.<br /> <br /> *Full contracts. Partly as a result of the verifiability goal, but also for clarity and documentation, the contracts associated with classes and features should describe the
relevant semantics in its entirety.<br /> <br /> *Simple and consistent hierarchy, in particular avoidance of descendant hiding and of "taxomania" (classes not representing a meaningful
abstraction, unnecessary inheritance links).<br /> <br /> *As in Classic EiffelBase, application of a systematic classification (a theory) of fundamental data structures.<br /> <br /> *Full traversal
mechanisms, based on external cursors (with internal cursors also provided when useful).<br /> <br /> *Client-interface compatibility with corresponding classes in Classic EiffelBase, whenever it
does not conflict with the preceding goals.<br /> <br /> ==Design overview==<br /> [[Image:eb2_container.png|left|Container class hierarchy]]<br /> [[Image:eb2_iterator.png|left|Iterator class
hierarchy]]<br /> <br /> ==Model-based contracts==<br /> <br /> ==Status and roadmap==<br /> <br /> Development of EiffelBase has started as a project at ETH Zurich; see the [http://
eiffelbase2.origo.ethz.ch/ project page]. Documentation and other material will soon be transferred to the present page.</div> Kenga
|
{"url":"https://dev.eiffel.com/api.php?action=feedcontributions&user=Kenga&feedformat=atom","timestamp":"2024-11-02T12:39:08Z","content_type":"application/atom+xml","content_length":"622216","record_id":"<urn:uuid:a0db3c4b-7f9f-499d-b945-79a25a97641d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00288.warc.gz"}
|
Here are explained some usefull terms.
PD code and EM code¶
EM code (Ewing-Millett code) and PD code (planar diagram code) are methods of descirbing knots in detail. To find EM code of your knot diagram:
• Give each crossing a number,
• Give each crossing a sign according to,
• For each crossing:
□ Name “a” direction of outgoing overpassing arc,
□ Name every other direction in a clockwise order “b”, “c” and “d”,
□ In this way, every arc – by which we mean here continuous piece of chain between two neighbour crossings – consists of two ends, each described by a number and a letter.
• A code for a crossing consists of its number, its sign, and four two-character descriptions of opposite ends of four arcs coming out of the crossing (in order “a”, “b”, “c”, “d”).
• A code for a structure consists of a list of codes for crossings.
Finding PD code is easier:
• Go along the structure according to its orientation and after each crossing asign next number to it (starting from 1).
• Each crossing is described by “X” symbol with numbers of its neighbouring arcs: counter-clockwise starting from ingoing underpassing.
• Structure code is described by a list of its crossings.
In case of spatial graphs (theta-curves, handcuffs) PD code can be extended. In such a case every vertex connected to three arcs is described by “V” symbol with numbers of its neighbouring arcs in
any order.
Reidemeister moves¶
Set of basic moves that change knot diagram but doesn’t alter knot topology.
KMT algorithm¶
Based on Koniaris’s and Muthukamar’s method. This algorithm analyzes all triangles in a chain made by three consecutive points, and removes the middle point in case a given triangle is not
intersected by any other segment of the chain. In effect, after a number of iterations, the initial chain is replaced by (much) shorter and simpler chain of the same topological type.
|
{"url":"https://topoly.cent.uw.edu.pl/dictionary.html","timestamp":"2024-11-12T00:32:02Z","content_type":"text/html","content_length":"11773","record_id":"<urn:uuid:ce47449d-63d1-4142-b4a2-08f634103dc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00737.warc.gz"}
|
Monad; that’s a wrap!
Just like everybody else who starts to look at monads, I found it was like coming to the face of a sheer cliff. Let me qualify that: just like every other programmer who is not a mathematician (and
that’s most of us). I am looking at monads in the context of clojure, so code snippets will generally be written in clojure-like syntax.
I have found these resources the most helpful:
Adam’s talk eschews category theory, and mathematics in general, concentrating on how to define and use monads in code. This is an essential approach for the rest of us. Unfortunately, all of the
terms used in discussing monads hark back to the mathematics, and that, I believe, makes the practical use of monads more confusing than it need be. This point was brought home to me strongly when,
after observing what some of the others were saying, I watched the first part of a lecture by this extraordinary woman.
Warning: take everything I say with a large grain of salt. I am writing this to help me to sort it out in my own mind, and there are no guarantees that any of it is correct.
In all of the discussions I have seen, it is stressed that monads are defined in terms of the behaviour of two functions.
Although it is not usually mentioned first, I will start with the function that I will call wrap. Nobody else calls it that, but that’s what it does. It goes by a number of aliases:
• result, m_result, mResult, etc.
• return, m_return, etc.
• lift, m_lift, etc.
• the monadic function (? Sometimes. However, the 2nd argument to rewrap —see below— is generally called monadic function, so it may be more accurate to describe wrap as the base function on which
all other monadic functions of a given monad are built.)
When he discusses the Array Monad for Javascript, Santosh Rajan gives the following signature for the monadic function, i.e. wrap.
f: T -> [T]
That is, for the Array Monad, wrap takes a value of some type T, and wraps it in an array. Both the type of value and the mode of wrapping are specific to each defined monad, but once defined, they
are fixed for that monad.
The resulting, wrapped, value returned from wrap is known as a monadic value, and is often represented in code as mv. In this discussion, I’ll call it wrapt, for obvious reasons. I’ll call the
argument to wrap the basic value, or bv.
;; Takes a basic value; returns a wrapt value
(defn mymonad-wrap [bv]
(let [wrapt (do-the-wrapping-of bv)]
You can see why monadic values are frequently called containers.
unwrap is under wraps, so to speak, because it is not part of the public face of monads. It is NOT one of the two functions which define the behaviour of a monad. It is, however, essential to the
functioning of monads, and some kind of unwrap functionality must be available to the other function in the definition of monad: rewrap.
unwrap is the inverse of wrap, not surprisingly. In terms of Santosh’s Array monad, it’s signature might look like this.
f: T <- [T]
That is, unwrap takes a wrapt (monadic value), which is a wrapped basic value and returns the basic value.
This the function that is generally called bind, m_bind, etc., although in Santosh’s examples, this function takes the name of the monad; for example, arrayMonad. The signature that Santosh gives
for this function in the Array Monad is
M: [T] -> [T]
That is, it transforms a wrapt value to another wrapt value. In the case of the Array monad, the basic value is wrapped in an array.
rewrap looks like this.
(defn mymonad-rewrap [wrapt, mf]
;; call monadic function given as 2nd arg, on the
;; basic value extracted from the wrapt value given
;; in the 1st argument. Return the new wrapt value.
(let [bv (do-unwrap-fn-on wrapt)
new-wrapt (mf bv)]
So, what rewrap does is
• unwrap its monadic value argument to get the basic value
• call its monadic function argument with the unwrapped basic value to…
□ modify the basic value
□ wrap the modified basic value and return a new wrapt value
The monadic function argument, mf, deserves a closer look. mf operates on a basic value to produce a new wrapt value. It is, in fact, a composition of functions. It composes wrap and some
operation that modifies the basic value. So,
mf ⇒ (f′ ⋅ wrap)
where f′ is a function that modifies the basic value. In that scheme, wrap itself can be described as
wrap′ ⇒ (identity ⋅ wrap)
That given, we can now describe rewrap as
(defn rewrap [wrapt, (f′ ⋅ wrap)]
(let [bv (unwrap wrapt)
new-wrapt (wrap (f′ bv))]
or, equivalently,
(defn rewrap [wrapt, (f′ ⋅ wrap)]
(wrap (f′ (unwrap wrapt))))
The 3 R’s
Monads must obey three rules. These rules I have taken from Jim Duey’s post, with the appropriate translation to the “wrap” terminology I’m using here.
Rule 1
(rewrap (wrap x) f) ≡ (f x)
Alternatively, given our rewriting of rewrap, above, but using f rather than (f′ ⋅ wrap);
(f (unwrap (wrap x)) ≡ (f x)
⇒ (f x) ≡ (f x)
Rule 2
(rewrap wrapt wrap) ≡ wrapt
⇒ (wrap (unwrap wrapt)) ≡ wrapt
⇒ wrapt ≡ wrapt
Rule 3
(rewrap (rewrap wrapt f) g) ≡
(rewrap wrapt (fn [x] (rewrap (f x) g)))
LHS ⇒ (rewrap (f (unwrap wrapt)) g)
⇒ (g (unwrap (f (unwrap wrapt))))
⇒ (g (unwrap (f x))) [1]
⇒ (g (unwrap (wrap (f′ x))))
⇒ (g (f′ x))
RHS ⇒ (rewrap wrapt (fn [x] (g (unwrap (f x)))))
⇒ ((fn [x] (g (unwrap (f x)))) (unwrap wrapt))
⇒ ((fn [x] (g (unwrap (f x)))) x) [2]
Everything looks pretty straightforward, except for the correspondence between [1] and [2] in Rule 3. Something seems odd about it, even though the results will be the same.
One thought on “Monad; that’s a wrap!”
1. That is looks so complicated
|
{"url":"https://pbw.id.au/blog/2015/02/monad-thats-a-wrap/","timestamp":"2024-11-02T18:44:12Z","content_type":"text/html","content_length":"69330","record_id":"<urn:uuid:781c670a-ede4-450b-950b-0dd2395d9dd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00314.warc.gz"}
|
Divide by Zero error
I am using the formula below to return the average value of a column. I chose not to use the standard AVG formula because it was considering blank cells and cells with zeros skewing the true average
total. However, the formula below is returning DVIVD BY ZERO. I tried adding IFERROR and got incorrect argument. Where did I go wrong here?
=(SUM({Year_1})) / ((COUNT({Year_1})) - (COUNTIF({Year_1}, 0)))
Thank you,
Best Answers
• Hi @LeAndre P
The error indicates that the second half of your formula returns 0. To troubleshoot this, I would try out each COUNT formula separately to see what it returns:
=COUNTIF({Year_1}, 0)
If you're getting the same number, then subtracting one from the other would result in 0, which you cannot divide by. If you shouldn't be getting the same number, can you post a screen capture of
your {Year_1} column (but please block out sensitive data).
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Hi @LeAndre P
I'm glad you were able to solve it! Just a note... although this should work fine, you don't need quite so many parentheses. It may be easier to troubleshoot down the line if you remove out the
=IF(COUNT({Year_1}) = COUNTIF({Year_1}, 0), 0, SUM({Year_1}) / (COUNT({Year_1}) - COUNTIF({Year_1}, 0)))
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Hi @LeAndre P
The error indicates that the second half of your formula returns 0. To troubleshoot this, I would try out each COUNT formula separately to see what it returns:
=COUNTIF({Year_1}, 0)
If you're getting the same number, then subtracting one from the other would result in 0, which you cannot divide by. If you shouldn't be getting the same number, can you post a screen capture of
your {Year_1} column (but please block out sensitive data).
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Hi Genevieve,
Thank for assisting. I think I managed to solve this one by using the following formula.
=IF((COUNT({Year_1})) = (COUNTIF({Year_1}, 0)), 0, SUM({Year_1}) / ((COUNT({Year_1})) - (COUNTIF({Year_1}, 0))))
• Hi @LeAndre P
I'm glad you were able to solve it! Just a note... although this should work fine, you don't need quite so many parentheses. It may be easier to troubleshoot down the line if you remove out the
=IF(COUNT({Year_1}) = COUNTIF({Year_1}, 0), 0, SUM({Year_1}) / (COUNT({Year_1}) - COUNTIF({Year_1}, 0)))
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
Help Article Resources
|
{"url":"https://community.smartsheet.com/discussion/92654/divide-by-zero-error","timestamp":"2024-11-07T15:18:44Z","content_type":"text/html","content_length":"411648","record_id":"<urn:uuid:95e1ec53-214a-4e43-b393-6ad3c2ff4810>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00354.warc.gz"}
|
I was wondering if we will be expected to find the characteristic equations of higher-order equations (3rd,4th, ..., nth), or will we be provided the characteristic equation in the exam.
Yes, because for equations given they could be found easily
If any of the $a_i$'s are not constant, then we cannot use the method above. Non-constant coefficient differential equations are generally harder to solve. We discussed a few methods in class such as
reduction of order or using the Wronskian, but both methods require already knowing one solution.
|
{"url":"https://forum.math.toronto.edu/index.php?PHPSESSID=h66toj3s05i9qa9s9cd1pmojr6&topic=2248.msg6848","timestamp":"2024-11-06T23:59:58Z","content_type":"application/xhtml+xml","content_length":"30742","record_id":"<urn:uuid:ab9700eb-a459-48c3-9f55-a9a8c21ab8cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00518.warc.gz"}
|
Comparison of Classical and Bayesian Statistics
Achshah R M
Classical (or frequentist) and Bayesian statistics are the two main branches of statistical inference, but they have fundamentally different interpretations of what probabilities represent and how
they should be used. This leads to differences in the application and interpretation of their methods.
1. Interpretation of Probability:
• Classical Statistics: In the frequentist viewpoint, probabilities represent long-term frequencies of events. For example, if a fair coin is tossed repeatedly, we expect it to land on heads about
50% of the time. Here, probabilities are objective properties of the real world and are not associated with any degree of belief or knowledge about an event.
• Bayesian Statistics: In contrast, Bayesian statistics interprets probabilities as degrees of belief. For example, if a Bayesian says there is a 50% probability of rain tomorrow, it means they
believe that the chance of rain is equivalent to the chance of a fair coin landing on heads - it's a subjective belief about a single event, not a long-term frequency.
2. Use of Prior Information:
• Classical Statistics: Classical methods do not incorporate prior information about unknown parameters. They use only the data at hand to make inferences about the population. This can be
beneficial because it avoids the potential subjectivity introduced by the choice of prior.
• Bayesian Statistics: Bayesian methods, on the other hand, combine prior information with the data to form a posterior distribution. This allows for the integration of prior knowledge or beliefs
into the analysis, which can be particularly helpful in cases of limited data.
3. Hypothesis Testing and Confidence Intervals:
• Classical Statistics: Classical statistics typically uses null hypothesis significance testing (NHST) and p-values for hypothesis testing. In this framework, we calculate the probability of
obtaining the observed data (or more extreme data) given that a null hypothesis is true. Additionally, a 95% confidence interval means that if we were to repeat the experiment many times, 95% of
the calculated intervals would contain the true population parameter.
• Bayesian Statistics: Bayesian methods provide direct probabilities about hypotheses or parameters via the posterior distribution. For example, a 95% Bayesian credible interval can be interpreted
as the interval within which the parameter lies with 95% probability. This is often considered more intuitive than the frequentist interpretation of confidence intervals.
4. Computational Complexity:
• Classical Statistics: Classical methods often involve simpler calculations and can be less computationally intensive than Bayesian methods.
• Bayesian Statistics: Bayesian methods, especially for complex models, often require sophisticated computational techniques like Markov Chain Monte Carlo (MCMC). However, the advent of powerful
computers and algorithms has made these computations increasingly feasible.
Both frameworks have their strengths and weaknesses and are useful in different contexts. The choice between classical and Bayesian statistics should be based on the specific requirements of the
analysis, including the nature of the problem, the available data, and the practical implications of the results.
Practical Example of Classical and Bayesian Statistics
Assume we have data for the past 50 years and we are interested in predicting whether or not it will rain tomorrow.
Let's say we are interested in the proportion of rainy days. We would calculate the sample proportion of rainy days (let's say it's 0.3 or 30% in our dataset) and create a confidence interval for
this proportion. We might use a method like bootstrapping or a formula-based method (like the one for a binomial proportion) to construct this interval.
Suppose our 95% confidence interval for the population proportion of rainy days is (0.25, 0.35). In the classical framework, we would interpret this as: "If we were to collect many samples and
construct a confidence interval from each sample, about 95% of these intervals would contain the true proportion of rainy days."
Note that in this classical framework, we do not incorporate any prior beliefs we might have about the proportion of rainy days, and we do not make a probabilistic statement about tomorrow's weather.
In the Bayesian framework, we would first specify a prior distribution that represents our beliefs about the proportion of rainy days before seeing the data. Suppose we believe that all proportions
are equally likely, so we specify a uniform prior distribution.
After observing the data, we update our beliefs to obtain a posterior distribution for the proportion of rainy days. Suppose our posterior distribution is a Beta distribution (which is the conjugate
prior for a binomial likelihood) with parameters that were updated based on the data.
We could then use this posterior distribution to make a probabilistic statement about tomorrow's weather. For example, we could find the probability that tomorrow is a rainy day as the posterior mean
(let's say it's 0.31 or 31%). We could also create a 95% credible interval, let's say it's (0.26, 0.36). We would interpret this as "given the data, we believe that the true proportion of rainy days
is between 26% and 36% with a 95% probability".
Notice that in this Bayesian framework, we incorporated our prior beliefs (albeit vague in this case), updated these beliefs based on the data, and made a probabilistic statement about a future
While both methods gave us similar estimates and intervals, the interpretations are quite different. The classical method gave us a range of values that would contain the true proportion in a long
series of repetitions of the same sampling procedure, while the Bayesian method gave us a range of values that we believe contains the true proportion with a certain probability.
Moreover, the Bayesian method allowed us to make a probabilistic statement about a future event (tomorrow's weather), which was not straightforward in the classical framework.
Thus, while the computations might be similar, the philosophical differences between classical and Bayesian statistics led to different interpretations and different types of conclusions.
|
{"url":"https://www.effyies.com/post/comparison-of-classical-and-bayesian-statistics","timestamp":"2024-11-03T07:19:46Z","content_type":"text/html","content_length":"1050486","record_id":"<urn:uuid:289800c1-c2d1-40fd-90fb-84e5ec676df4>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00379.warc.gz"}
|
Equilibrium in misspecified Markov decision processes
Theoretical Economics 16 (2021), 717–757
Equilibrium in misspecified Markov decision processes
Ignacio Esponda, Demian Pouzo
We provide an equilibrium framework for modeling the behavior of an agent who holds a simplified view of a dynamic optimization problem. The agent faces a Markov Decision Process, where a transition
probability function determines the evolution of a state variable as a function of the previous state and the agent’s action. The agent is uncertain about the true transition function and has a prior
over a set of possible transition functions; this set reflects the agent’s (possibly simplified) view of her environment and may not contain the true function. We define an equilibrium concept and
provide conditions under which it characterizes steady-state behavior when the agent updates her beliefs using Bayes’ rule.
Keywords: Misspecified model, Markov decision process, equilibrium
JEL classification: C61, D83
Full Text:
|
{"url":"https://econtheory.org/ojs/index.php/te/article/viewArticle/20210717/0","timestamp":"2024-11-11T02:04:34Z","content_type":"text/html","content_length":"4148","record_id":"<urn:uuid:a95d9797-d2ac-4bee-976c-4c5b9e531f33>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00739.warc.gz"}
|
associative property multiplication worksheet 3 digits
Associative property worksheets | K5 Learning
Associative property of multiplication | TPT
Associative property of multiplication | Grade1to6
Properties of multiplication worksheet for Grade 3
Printable 3rd Grade Associative Property of Multiplication ...
Properties of Multiplication Worksheets
Free Printable Multiplication Properties Worksheets for 3rd ...
Grade 3 Associative Property of Multiplication Worksheets 2024
Associative property of multiplication | TPT
associative property of multiplication Worksheets
Associative Property Worksheet worksheet
Associative Law of Multiplication (Whole Numbers Only) (A)
Associative property roll | TPT
Properties Worksheets | Free - CommonCoreSheets
Free Printable Multiplication Properties Worksheets for 3rd ...
Associative Property Of Multiplication Worksheets
Associative Property of Addition - Definition & Worksheets
Associative property of multiplication worksheets | Associat… | Flickr
Distributive property worksheets | K5 Learning
Verify the Associative Property — Printable Math Worksheet
Math Properties Worksheets
Multiplication & Associative Property Worksheets (Printable, Online)
Printable Properties of Multiplication Posters | Twinkl USA
50+ Associative Property of Multiplication worksheets for 3rd ...
Free Printable Multiplication Properties Worksheets for 3rd ...
Distributive Property Math Worksheets | Twinkl USA - Twinkl
Multiplication Associative Property 3rd Grade - Math Videos for Kids
Grade 3 Associative Property of Multiplication Worksheets 2024
Using the Associative, Commutative and Distributive Properties for ...
Properties of Multiplication - Math Worksheets
50+ Associative Property of Multiplication worksheets for 3rd ...
Algebra: Use Associative Property of Addition and Multiplication
distributive property of multiplication Worksheets
Free Associative Property of Addition Worksheet - Free Worksheets ...
|
{"url":"https://worksheets.clipart-library.com/associative-property-multiplication-worksheet-3-digits.html","timestamp":"2024-11-12T01:10:07Z","content_type":"text/html","content_length":"28742","record_id":"<urn:uuid:7bf8e258-05ef-4d13-8ecd-3ffe8b8644f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00523.warc.gz"}
|
In previous chapters differential equations, and then transfer functions, were derived for mechanical and electrical systems. These can be converted to the s-domain, as shown in the
mass-spring-damper example in Figure 17.12 A mass-spring-damper example. In this case we assume the system starts undeflected and at rest, so the ’D’ operator may be directly replaced with the
Laplace ’s’. If the system did not start at rest and undeflected, the ’D’ operator would be replaced with a more complex expression that includes the initial conditions.
Figure 17.12 A mass-spring-damper example
Impedances in the s-domain are shown in Figure 17.13 Impedances of electrical components. As before these assume that the system starts undeflected and at rest.
Figure 17.13 Impedances of electrical components
Figure 17.14 A circuit example shows an example of circuit analysis using Laplace transforms. The circuit is analyzed as a voltage divider, using the impedances of the devices. The switch that closes
at t=0s ensures that the circuit starts at rest. The calculation result is a transfer function.
Figure 17.14 A circuit example
At this point two transfer functions have been derived. To state the obvious, these relate an output and an input. To find an output response, an input is needed.
|
{"url":"https://engineeronadisk.com/V2/book_modelling/engineeronadisk-158.html","timestamp":"2024-11-12T03:38:43Z","content_type":"text/html","content_length":"3947","record_id":"<urn:uuid:5edf1d0d-f81b-43eb-8215-d733d5a38b87>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00509.warc.gz"}
|
esh TSDF
Add mesh to mesh TSDF
Since R2024a
isAdded = addMesh(mTSDF,meshStruct) adds one or more meshes to the truncated signed distance field (TSDF), computes the TSDF around the added meshes, and returns an indication of which meshes were
successfully added.
Add Meshes to Mesh TSDF Manager
Create two collision boxes and one collision sphere. The collision boxes represent a static environment and the sphere represents a dynamic obstacle with a pose that could change at any time.
box1 = collisionBox(0.5,1,0.1);
box2 = collisionBox(0.5,0.1,0.2,Pose=trvec2tform([0 -0.45 0.15]));
sph = collisionSphere(0.125,Pose=trvec2tform([-0.1 0.25 0.75]));
title("Static Environment and Dynamic Obstacle")
v = [110 10];
Create a mesh TSDF manager with a resolution of 25 cells per meter.
tsdfs = meshtsdf(Resolution=25);
To improve the efficiency of signed distance field computation, combine meshes that represent the static environment.
staticMeshes = geom2struct({box1,box2});
staticEnv = staticMeshes(1);
staticEnv.Pose = eye(4);
staticEnv.Vertices = [];
staticEnv.Faces = [];
for i = 1:numel(staticMeshes)
H = staticMeshes(i).Pose;
V = staticMeshes(i).Vertices*H(1:3,1:3)'+ H(1:3,end)';
nVert = size(staticEnv.Vertices,1);
staticEnv.Vertices = [staticEnv.Vertices; V];
staticEnv.Faces = [staticEnv.Faces; staticMeshes(i).Faces+nVert];
staticEnv.ID = 1;
Add the static environment mesh to the TSDF manager.
Convert the sphere collision geometry into a structure for the mesh TSDF manager. Assign it an ID of 2 and add it to the mesh TSDF manager.
obstacleID = 2;
dynamicObstacle = geom2struct(sph,obstacleID);
axis equal
title("Mesh TSDFs of Static Environment and Dynamic Obstacle")
Update the pose of the dynamic obstacle in the mesh TSDF manager by changing Pose property of the object handle of the obstacle. Then use the updatePose function to update the pose of the mesh in the
TSDF manager.
dynamicObstacle.Pose = trvec2tform([0.2 0.25 0.2]);
axis equal
title("Updated Dynamic Obstacle Pose")
Input Arguments
mTSDF — Truncated signed distance field for 3-D meshes
meshtsdf object
Truncated signed distance field for 3-D meshes, specified as a meshtsdf object.
Example: meshtsdf(meshes,TruncationDistance=5) creates a TSDF for the specified meshes with a truncation distance of 5 meters.
meshStruct — Geometry mesh structure
structure | N-element structure array
Geometry mesh, returned as a structure or an N-element structure array. N is the total number of collision objects.
Each structure contains these fields:
• ID — ID of the geometry structure stored as a positive integer. By default, the ID of each structure corresponds to the index of the structure in meshStruct. For example, if meshStruct contains
five mesh structures, the first mesh structure at index 1 has an ID of 1, and the last mesh structure at index 5 has an ID of 5.
• Vertices — Vertices of the geometry, stored as an M-by-3 matrix. Each row represents a vertex in the form [x y z] with respect to the reference frame defined by Pose. M is the number of vertices
needed to represent the convex hull of the mesh.
• Faces — Faces of the geometry, stored as an M-by-3 matrix. Each row contains three indices corresponding to vertices in Vertices that define a triangle faces of the geometry. M is the number of
vertices in Vertices.
• Pose — Pose of the geometry as a 4-by-4 homogeneous transformation matrix specifying a transformation from the world frame to the frame in which the vertices are defined.
Data Types: struct
Output Arguments
isAdded — Indication of whether meshes were added or not
true or 1 | false or 0 | N-element vector of logical scalars
Indication of whether meshes were added or not, returned as logical 1 (true) if the mesh was successfully added or 0 (false) if the mesh you are adding a mesh with an ID that already exists in mTSDF.
If meshStruct is an N-element of mesh structures, then isAdded is an N-element vector of logical scalars corresponding to each of the N mesh structures in meshStruct.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Version History
Introduced in R2024a
|
{"url":"https://it.mathworks.com/help/nav/ref/meshtsdf.addmesh.html","timestamp":"2024-11-10T11:20:00Z","content_type":"text/html","content_length":"85929","record_id":"<urn:uuid:67662022-0f94-475c-87d7-094c301ea077>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00426.warc.gz"}
|
Optimisation | Danny James Williams
Numerical Optimisation
The general idea in optimisation is to find a minimum (or maximum) of some function. Generally, our problem has the form \[ \min_{\boldsymbol{x}} f(\boldsymbol{x}). \] Sometimes our problem can be
constrained, which would take the general form \[ \min_{\boldsymbol{x}} f(\boldsymbol{x}) \]\[ \text{subject to } g_i(x) \leq 0 \] for \(i=1,\dots,m\), \(f:\mathbb{R}^n \to \mathbb{R}\). These are
important problems to solve, and it is often that there is no analytical solution to the problem, or the analytical solution is unavailable. This portfolio will explain the most popular numerical
optimisation methods, and those readily available in R.
Optimising a complicated function
To demonstrate the different optimisation methods, the speeds and abilities of each, consider optimising the Rastrigin function. This is a non-convex function that takes the form \[ f(\boldsymbol{x})
= An + \sum^n_{i=1} [x_i^2-A\cos(2\pi x_i)], \]
where \(n\) is the length of the vector \(\boldsymbol{x}\). We can plot this function in 3D using the plotly package to inspect it.
f = function(x) A*n + sum(x^2 - A*cos(2*pi*x))
A = 5
n = 2
x1 = seq(-10,10,length=100)
x2 = seq(-10,10,length=100)
xy = expand.grid(x1,x2)
z = apply(xy,1,f)
dim(z) = c(length(x1),length(x2))
z.plot = list(x=x1, y=x2, z=z)
image(z.plot, xlab = "x", ylab = "y", main = "Rastrigin Function")
f. We can see from inspection of the plot that there is a global minimum at \(\boldsymbol{x} = \boldsymbol{0}\), where \(f(\boldsymbol{0}) = 0\), and likewise:
## [1] 0
So we will be evaluating optimisation methods based on how close they get to this true solution. We continue this portfolio by explaining the different optimisation methods, and evaluating their
performance in finding the global minimum of the Rastrigin function.
When \(n=2\), the gradient and hessian for this function can be calculated analytically: \[ \nabla f(\boldsymbol{x}) = \begin{pmatrix} 2 x_1 + 2\pi A \sin(2\pi x_1) \\ 2 x_2 + 2\pi A \sin(2\pi x_2) \
end{pmatrix} \] \[ \nabla^2 f(\boldsymbol{x}) = \begin{pmatrix} 2 + 4\pi^2 A \cos (2\pi x_1) & 0 \\ 0 & 2 + 4\pi^2 A \cos (2\pi x_2) \end{pmatrix} \] We can construct these functions in R.
grad_f = function(x) {
c(2*x[1] + 2*pi*A*sin(2*pi*x[1]),
2*x[2] + 2*pi*A*sin(2*pi*x[2]) )
hess_f = function(x){
H11 = 2 + 4*pi^2*A*sin(2*pi*x[1])
H22 = 2 + 4*pi^2*A*sin(2*pi*x[2])
These analytical forms of the gradient and hessian can be supplied to various optimisation algorithms to speed up convergence.
Optimisation problems can be one or multi dimensional, where the dimension refers to the size of the parameter vector, in our case \(n\). Generally, one-dimensional problems are easier to solve, as
there is only one parameter value to optimise over. In statistics, we are often interested in multi-dimensional optimisation. For example, in maximum likelihood estimation we are trying to find
parameter values that maximise a likelihood function, for any number of parameters. For the Rastrigin function in our example, we have taken the dimension \(n=2\).
Optimisation Methods
Gradient Descent Methods
Iterative algorithms take the form \[ \boldsymbol{x}_{k+1} = \boldsymbol{x}_k + t \boldsymbol{d}_k, \: \: \text{ for iterations } k=0,1,\dots, \] \(\boldsymbol{d}_k \in \R^n\) is the descent
direction, \(t_k\) is the stepsize. \[ f'(\boldsymbol{x}; \boldsymbol{d})=\nabla f(\boldsymbol{x})^T \boldsymbol{d} < 0. \] So moving \(\boldsymbol{x}\) in the descent direction for timestep \(t\)
decreases the function, so we move towards a minimum. The is the negative gradient of \(f\), i.e. \(\boldsymbol{d}_k = -\nabla f(\boldsymbol{x}_k)\), or normalised \(\boldsymbol{d}_k = {-\nabla f(\
boldsymbol{x}_k)}/{\norm{\nabla f(\boldsymbol{x})}}\). We can construct a general gradient descent method in R and evaluate performance on optimising the Rastrigin function.
gradient_method = function(f, x, gradient, eps=1e-4, t=0.1, maxiter=1000){
converged = TRUE
iterations = 0
while((!all(abs(gradient(x)) < eps))){
if(iterations > maxiter){
cat("Not converged, stopping after", iterations, "iterations \n")
converged = FALSE
gradf = gradient(x)
d = -gradf/abs(gradf)
x = x - t*gradf
iterations = iterations + 1
if(converged) {cat("Number of iterations:", iterations, "\n")
This code essentially will continue running the while loop until the tolerance condition is satisfied, where the change in \(\boldsymbol{x}\) from one iteration to another is negligible. Now we can
see in which cases this will provide a solution to the problem of the Rastrigin function.
gradient_method(f, x = c(1, 1), grad_f)
## Not converged, stopping after 1001 iterations
## $f
## [1] 20.44268
## $x
## [1] -3.085353 -3.085353
gradient_method(f, x = c(.01, .01), grad_f)
## Not converged, stopping after 1001 iterations
## $f
## [1] 17.82949
## $x
## [1] -2.962366 -2.962366
Even when the initial guess of \(x\) was very close to zero, the true solution, this function did not converge. This shows that under a complex and highly varying function such as the Rastrigin
function, the gradient method has problems. This can be improved by including a backtracking line search to dynamically change the value of the stepsize \(t\) to \(t_k\) for each iteration \(k\).
This method reduces the stepsize \(t\) for each iteration \(k\) via \(t_k = \beta t_k\) for \(\beta \in (0,1)\) while \[ f(\boldsymbol{x}_k) - f(\boldsymbol{x}_k + t_k \boldsymbol{d}_k) < -\alpha\
nabla f(\boldsymbol{x}_k)^T \boldsymbol{d}_k. \] and for \(\alpha \in (0,1)\). We can add this to the gradient method function with the line while( (f(x) - f(x + t*d) ) < (-alpha*t * t(gradf)%*%d)) t
= beta*t. Meaning we need to specify \(\alpha\) and \(\beta\). After this is added to the function, we have
gradient_method(f, c(0.01,0.01), grad_f, maxiter = 10000)
## Number of iterations: 1255
## Converged!
## $f
## [1] 5.002503e-09
## $x
## [1] 5.008871e-06 5.008871e-06
Now we finally have convergence! However, this is for when the initial guess was very close to the actual solution, and so in more realistic cases where we don’t know this true solution, this method
is likely inefficient and inaccurate. The Newton method is an advanced form of the basic gradient descent method.
Newton Methods
The Newton method seeks to solve the optimisation problem using evaluations of Hessians and a quadratic approximation of a function \(f\) around \(\boldsymbol{x}_k\). This is under the assumption is
that the Hessian \(\nabla^2 f(\boldsymbol{x}_k)\) is . The unique minimiser of the quadratic approximation is \[ \boldsymbol{x}_{k+1} = \boldsymbol{x}_k - (\nabla^2 f(\boldsymbol{x}_k))^{-1} \nabla f
(\boldsymbol{x}_k), \] which is known as . Here you can consider \((\nabla^2 f(\boldsymbol{x}_k))^{-1} \nabla f(\boldsymbol{x}_k)\) as the descent direction in a scaled gradient method. The nlm
function from base R uses the Newton method. It is an expensive algorithm to run, because it involves inverting a matrix, the hessian matrix of \(f\). Newton methods work a lot better if you can
supply an algebraic expression for the hessian matrix, so that you do not need to numerically calculate the gradient on each iteration. We can use nlm to test the Newton method on the Rastrigin
f_fornlm = function(x){
out = f(x)
attr(out, 'gradient') <- grad_f(x)
attr(out, 'hessian') <- hess_f(x)
nlm(f, c(-4, 4), check.analyticals = TRUE)
## $minimum
## [1] 3.406342e-11
## $estimate
## [1] -4.135221e-07 -4.131223e-07
## $gradient
## [1] 1.724132e-05 1.732303e-05
## $code
## [1] 2
## $iterations
## [1] 3
So this converged to the true solution in a surprisingly small number of iterations. The likely reason for this is due to Newton’s method using a quadratic approximation, and the Rastrigin function
taking a quadratic form.
In complex cases, the hessian cannot be supplied analytically. Even if it can be supplied analytically, in high dimensions the hessian is a very large matrix, which makes it computationally expensive
to invert for each iteration. The BFGS method approximates the hessian matrix, increasing computability and efficiency. The BFGS method is the most common quasi-Newton method, and it is one of the
methods that can be suppled to the optim function. It approximates the hessian matrix with \(B_k\), and for iterations \(k=0,1,\dots\), it has the following basic algorithm:
Initialise \(B_0 = I\) and initial guess \(x_0\).
1. Obtain a direction \(\boldsymbol{d}_k\) through the solution of \(B_k \boldsymbol{d}_k = - \nabla f(\boldsymbol{x}_k)\)
2. Obtain a stepsize \(t_k\) by line search \(t_k = \text{argmin} f(\boldsymbol{x}_k + t\boldsymbol{d}_k)\)
3. Set \(s_k = t_k \boldsymbol{d}_k\)
4. Update \(\boldsymbol{x}_{k+1} = \boldsymbol{x}_k + \boldsymbol{s}_k\)
5. Set \(\boldsymbol{y}_k = \nabla f(\boldsymbol{x}_{k+1}) - \nabla f(\boldsymbol{x}_k)\)
6. Update the hessian approximation \(B_{k+1} = B_k + \frac{\boldsymbol{y}_k\boldsymbol{y}_k^T}{\boldsymbol{y}_k^T \boldsymbol{s}_k} - \frac{B_k \boldsymbol{s}_k \boldsymbol{s}_k^T B_k}{\boldsymbol
{s}_k^T B_k \boldsymbol{s}_k}\)
BFGS is the fastest method that is guaranteed convergence, but has its downsides. BFGS stores the matrices \(B_k\) in memory, so if your dimension is high (i.e. a large amount of parameters), these
matrices are going to be large and storing them is inefficient. Another version of BFGS is the low memory version of BFGS, named ‘L-BGFS’, which only stores some of the vectors that represent \(B_k\)
. This method is almost as fast. In general, you should use BFGS if you can, but if your dimension is too high, reduce down to L-BFGS.
This is a very good but complicated method. Luckily, the function optim from the stats package in R has the ability to optimise with the BFGS method. Testing this on the Rastrigin function gives
optim(c(1,1), f, method="BFGS", gr = grad_f)
## $par
## [1] 0.9899629 0.9899629
## $value
## [1] 1.979932
## $counts
## function gradient
## 19 3
## $convergence
## [1] 0
## $message
## NULL
optim(c(.1,.1), f, method="BFGS", gr = grad_f)
## $par
## [1] 4.61081e-10 4.61081e-10
## $value
## [1] 0
## $counts
## function gradient
## 31 5
## $convergence
## [1] 0
## $message
## NULL
So the BFGS method actually didn’t find the true solution for an initial value of \(\boldsymbol{x} = (1,1)\), but did for when the initial value was \(\boldsymbol{x} = (0.1,0.1)\).
Non-Linear Least Squares Optimisation
The motivating example we have used throughout this section was concerned with optimising a two-dimensional function, of which we were only interested in two variables that controlled the value of
the function \(f(\boldsymbol{x})\). In many cases, we have a dataset \(D = \{\boldsymbol{y},\boldsymbol{x}_i\}\), where we decomopose the ‘observations’ as \(\boldsymbol{y} = g(\boldsymbol{x}) + \
epsilon\), where \(\epsilon\) is a random noise parameter. In this case we are interested in finding an approximation to the data generating function \(g(\boldsymbol{x})\), which we call \(f(\
boldsymbol{x},\boldsymbol{\beta})\), and \(\boldsymbol{\beta}\) are some parameters of whose relationship with \(\boldsymbol{x}\) we model to make this approximation, so we are interested in
optimising over these parameters. The objective function we are minimising over is \[ \min_{\boldsymbol{\beta}} \sum^n_{i=1} r_i^2 = \min_{\boldsymbol{\beta}} \sum^n_{i=1} (y_i - f(x_i,\boldsymbol{\
beta}))^2, \] i.e. the squared difference between the observed dataset and the approximation to the data generating function that defines that dataset. Here, \(r_i = y_i - f(x_i,\boldsymbol{\beta})\)
is known as the residuals, and it is of the most interest in a least squares setting. Many optimisation methods are specifically designed to optimise the least squares problem, but all optimisation
methods can be used (provided they find a minimum). Some of the most popular algorithms for least squares are the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm. Both of these
algorithms are extensions of Newton’s method for general optimisation. The general form of the Gauss-Newton method is \[ \boldsymbol{\beta} \leftarrow \boldsymbol{\beta} - (J_r^TJ_r)^{-1}J_r^Tr_i, \]
where \(J_r\) is the Jacobian matrix of the residue \(r\), defined as \[ J_r = \frac{\partial r}{\partial \boldsymbol{\beta}}. \] So this is defined as the matrix of partial derivatives with respect
to each coefficient \(\beta_i\). The Levenberg-Marquardt algorithm extends this approach by including a diagonal matrix of small entries to the \(J_r^TJ_r\) term, to eliminate the possibility of this
being a singular matrix. This has the update process of \[ \boldsymbol{\beta} \leftarrow \boldsymbol{\beta} - (J_r^TJ_r+\lambda I)^{-1}J_r^Tr_i, \] where \(\lambda\) is some small value. In the
simple case where \(\lambda = 0\), this reduces to the Gauss-Newton algorithm. This is a highly efficient method, but in the case where our dataset is large, we may want to use stochastic gradient
Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) is a stochastic approximation to the standard gradient descent method. Instead of calculating the gradient for an entire dataset (which can be extremely large) it
calculates the gradient for a lower-dimensional subset of the dataset; picked randomly or deterministically. The form of this method is \[ \boldsymbol{x}_{k+1} = \boldsymbol{x}_k - t \nabla f_i(\
boldsymbol{x}_k) \] where \(i\) is an index that refers to cycling through all points \(i \in D\), the points in the dataset. This can be in different sizes of groups, so depending on the problem, \
(i\) can be large or small (relative to the size of the dataset). Stochastic gradient methods are useful in the setting where your dataset is very large, otherwise it could be unnecessary.
|
{"url":"https://dannyjameswilliams.co.uk/portfolios/sc1/optimisation/","timestamp":"2024-11-05T16:19:19Z","content_type":"text/html","content_length":"37832","record_id":"<urn:uuid:7c3b7d0d-50f1-4f36-9106-24cbf86da75f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00698.warc.gz"}
|
matematicasVisuales | Augmented Rhombicuboctahedron
We can add pyramids to a rhombicuboctahedron and we get a beautiful new polyhedron that it is like a star.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the augmented rhombicuboctahedron.
We can separate the pyramids and see that the interior is a rhombicuboctahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the rhombicuboctahedron.
Playing with the interactive application we can change the size of the pyramids.
This beautiful polyhedron is used as an ornament. Sometimes is called Moravian Star.
Ulm (Germany), 2013
Rothenburg ob der Tauber (Germany), 2013
Rothenburg ob der Tauber (Germany), 2013
Rothenburg ob der Tauber (Germany), 2013
W.W. Rouse Ball and H.S.M. Coxeter, 'Matematical Recreations & Essays', The MacMillan Company, 1947.
Peter R. Cromwell, 'Polyhedra', Cambridge University Press, 1999.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the augmented rhombicuboctahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the rhombicuboctahedron.
This polyhedron is also called Elongated Square Gyrobicupola. It is similar to the Rhombicuboctahedron but it is less symmetric.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the dodecahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the truncated octahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the cuboctahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the stellated octahedron (stella octangula).
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the octahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the truncated tetrahedron.
Leonardo da Vinci made several drawings of polyhedra for Luca Pacioli's book 'De divina proportione'. Here we can see an adaptation of the Campanus' sphere.
You can build a Rhombic Dodecahedron adding six pyramids to a cube. This fact has several interesting consequences.
The first drawing of a plane net of a regular octahedron was published by Dürer in his book 'Underweysung der Messung' ('Four Books of Measurement'), published in 1525 .
Using cardboard you can build beautiful polyhedra cutting polygons and glue them toghether. This is a very simple and effective technique. You can download several templates. Then print, cut and
glue: very easy!
A very simple technique to build complex and colorful polyhedra.
Material for a session about polyhedra (Zaragoza, 7th November 2014). We study the octahedron and the tetrahedron and their volumes. The truncated octahedron helps us to this task. We build a cubic
box with cardboard and an origami tetrahedron.
Material for a session about polyhedra (Zaragoza, 23rd Octuber 2015) . Building a cube with cardboard and an origami octahedron.
Material for a session about polyhedra (Zaragoza, 13th Abril 2012).
Material for a session about polyhedra (Zaragoza, 9th May 2014). Simple techniques to build polyhedra like the tetrahedron, octahedron, the cuboctahedron and the rhombic dodecahedron. We can build a
box that is a rhombic dodecahedron.
The compound polyhedron of a cube and an octahedron is an stellated cuboctahedron.It is the same to say that the cuboctahedron is the solid common to the cube and the octahedron in this polyhedron.
|
{"url":"http://www.matematicasvisuales.com/english/html/geometry/pyramidated/augmentedRCO.html","timestamp":"2024-11-07T10:46:28Z","content_type":"text/html","content_length":"29839","record_id":"<urn:uuid:36ff8c5a-4e9f-4fb0-a9c3-c75cecb4f759>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00855.warc.gz"}
|
- CODAP
August 20, 2024 at 6:02 pm #10453
Yes, you can code data based on a selection in the graph window or through a function using Boolean logic. Here’s a general approach that might help you:
1. Using Boolean Logic for Data Selection: You can create a new attribute, such as ‘Angle (deg)’, by applying Boolean logic to your raw data. For example, you could use a function like IF to
assign values based on conditions. In Excel or similar software, it might look like this:
=IF(A2 > 30, “High Angle”, “Low Angle”)
• This would categorize your data into ‚ÄòHigh Angle‚Äô or ‚ÄòLow Angle‚Äô based on your condition.
• Calculating the Mean: Once you’ve categorized your data, you can easily calculate the mean for each category. If you’re using Excel, you might use the AVERAGEIF function:=AVERAGEIF(B2:B100, “High
Angle”, C2:C100)
1. This will calculate the mean of the values in column C where the condition in column B (your new attribute) is “High Angle.”
2. Pasting Values into Multiple Cells: If you’re trying to paste values into multiple cells at once, you can usually do this by copying the desired value, selecting the range where you want to
paste, and then pasting. In Excel, you might use Ctrl+V or right-click and select “Paste.” If you’re dealing with formulas, make sure your cell references are set up correctly to apply the
formula across multiple cells.
If you’re using a specific software tool, the exact steps might differ slightly, but the general principles remain the same.Hope this helps!
|
{"url":"https://codap.concord.org/forums/reply/8952/","timestamp":"2024-11-04T21:30:03Z","content_type":"text/html","content_length":"88193","record_id":"<urn:uuid:3e71a56c-9a43-4ac4-9ec8-a05db2b2f58e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00087.warc.gz"}
|
Master Physics: Proven Tips to Solve Physics Doubts Effectively
How to Solve Physics Doubts
• Don’t panic. Physics problems can seem intimidating at first, but they are based on logical and consistent principles. Take a deep breath and relax before you start solving a problem.
• Try to understand the situation. What is going on in this problem? What are the given quantities and what are you asked to find? What are the relevant concepts and formulas that you need to
• Read the question carefully. Make sure you understand what the question is asking and what information is provided. Sometimes, the question may have multiple parts or hidden assumptions that you
need to identify.
• Organize the information. Write down the given data and the unknown variables in a systematic way. Use symbols, diagrams, tables, or charts to represent the information visually. This will help
you avoid confusion and mistakes later.
• Sketch the scene. If the problem involves motion, forces, fields, or geometrical optics, it is helpful to draw a diagram of the situation. Label the relevant quantities, angles, directions, and
coordinate axes on the diagram. This will help you visualize the problem and apply the appropriate formulas.
• Verify units. Make sure that all the quantities in the problem have consistent units. If not, convert them to a common system of units (SI or CGS). This will help you avoid errors in calculations
and dimensional analysis.
• Consider your formulas. Based on the concepts and principles involved in the problem, choose the formulas that relate the given and unknown quantities. Sometimes, you may need to combine or
manipulate multiple formulas to get the desired equation.
• Solve. Substitute the values of the given quantities into the equation and solve for the unknown variable. Use a calculator or a software tool if needed, but make sure you enter the values
correctly and follow the order of operations. Check your answer for reasonableness and accuracy.
• Check your answer. Compare your answer with the given data and see if it makes sense physically and logically. Does it have the correct units, sign, magnitude, and direction? Does it agree with
your intuition or expectation? If not, try to find out where you went wrong and correct your mistake.
• Review your solution. Go over your solution and see if you can explain each step clearly and logically. Try to identify any gaps or errors in your reasoning or calculations. If possible, try to
solve the problem using a different method or approach and see if you get the same answer. This will help you improve your understanding and confidence in solving physics problems.
Are you looking for the best Physics coaching classes for classes 11th and 12th? Then you should join Umesh Sir’s coaching classes, where you will get offline mode teaching with only 12 students per
batch for personal attention and doubt clearance. Umesh Sir will teach you all the topics of Physics from basics to advanced level with simple explanations and practical examples.
Some of the benefits of joining Umesh Sir’s coaching classes are:
• Daily practice sheets for each chapter
• Regular tests and feedback for each chapter
• Daily doubt session for better learning
Umesh Sir will also help you prepare for competitive exams like NEET and JEE Mains with special batches that will make you solve previous year’s papers, mock tests, tips and tricks, time management
skills, etc.
Don’t miss this opportunity! If you want to master Physics for classes 11th and 12th, enroll in Umesh Sir’s coaching classes now. Call 8882088801 for more details and free trial classes. Hurry up,
seats are limited.
|
{"url":"https://physicseasy.in/how-to-solve-physics-doubts/","timestamp":"2024-11-13T05:24:48Z","content_type":"text/html","content_length":"55036","record_id":"<urn:uuid:a0851034-5e0c-4c1a-a4b1-84b7a3073ce0>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00628.warc.gz"}
|
How to calculate z score in Excel
Notably, to calculate the z score in excel, one must understand how the z score works. Check out the formula for Z-Score
The z score measures several standard deviations above or below the given data points. To calculate the Z score, one has to take the data point, subtract the mean value, and divide it by the standard
deviation value.
To calculate the Z-score, one needs a set of data to provide the mean and the standard deviation.
The Excel formula or Function of Z score
The excel formula or function of calculating the Z score is; =STANDARDIZE (x, mean, standard_dev)
Where x is the raw score
Mean is the average number
Standard_dev is the standard deviation of the values
Process of Calculating Z-Score in Excel (with example)
For instance, an institution wishes to calculate the z score of students perusing different types of courses. This encompasses Bachelor’s Degree courses, Diploma Courses, Vocational Courses,
Certificate courses, and integrated Degree Courses.
Suppose we have a data set of an institution on the courses offered, we can find the z score value for every course offered as follows.
Step 1: Inputting the values of the data given.
Step 2: Calculating the mean of the dataset
The formula for calculating the mean of any given range of values is =AVERAGE (Range of Values)
Thus, the mean is 137
Step 3: Calculating the Standard Deviation of the Dataset
The formula for calculating the standard deviation of any given range of values is = STDVPA (Range of Values)
Thus, the standard deviation is 46.43
Step 4: Finding the z-score for the courses
We use the formula. Check out the formula for Z-Score
The z-values of each course will be;
One can use the formula set in excel for the calculations. The formula that one should input is: =STANDARDIZE (x, mean, standard_dev)
The values will be
How to make a z score chart in excel
To make the z score chart in excel, We have to find the values of the normal distribution.
To find these values, we use the formula =NORM.DIST(x, mean, standard_dev, pmf)
For example, using the above example, we have;
The values of the Normal Distribution will be;
We then insert a graph as follows;
As the Z chart shows, the graph is negatively skewed since the values are concentrated on the right side of the graph, and the left tail of the graph is longer.
How to interpret z-scores in Excel
The z score tells us how many standard deviations away a value is from the mean. Notably, the z value can be positive or negative, ranging from -3 standard deviations up to +3 standard deviations.
A positive z-score shows a value that is more than the mean, a negative z-score implies a value less than the mean, and a z-score of zero denotes a value equal to the mean.
From our above example, the Bachelor’s degree course has 150 students and a z score of 0.27997482. The Bachelor’s degree students are 0.27997482 standard deviations above the mean.
The Diploma course has 125 students and a z score of -0.258438295. The Diploma course students are -0.258438295 standard deviations below the mean.
The Vocational courses have 220 students and a z score of 1.787531543. The vocational course students are 1.787531543 standard deviations above the mean.
The Certificate courses have 100 students and a z score of -0.796851411. The certificate course students are -0.796851411 standard deviations below the mean.
The Integrated Degree has 90 students and a z score of -1.012216657. The integrated degree students are -1.012216657 standard deviations below the mean.
Notably, the greater the distance between a value and the mean, the larger the absolute value of the z-score for that value.
For instance, the number of students taking integrated degree courses is 90, which is further away from the mean (137) compared to the number of students taking Diploma courses which is 125, which
explains why the integrated degree students have a z-score with a larger absolute value.
|
{"url":"https://edutized.com/tutorial/calculate-z-score-excel/","timestamp":"2024-11-04T11:31:04Z","content_type":"text/html","content_length":"77305","record_id":"<urn:uuid:38f91e95-8260-46ec-bec6-d92c33b81473>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00220.warc.gz"}
|
RegularPython|regular python|Python Theory|Python Videos|Python News|Python Blog|Python Interview QuestionsRegularPython|regular python|Python Theory|Python Videos|Python News|Python Blog|Python Interview Questions
Q1). How do you handle missing values in Pandas?
Handling missing values is crucial for data analysis. You can use methods like fillna() to replace missing values or dropna() to remove rows or columns with missing values. Example:
# Filling missing values with the mean of the column
data.fillna(data.mean(), inplace=True)
Real-World Scenario: If you're analyzing customer data and some entries have missing ages, you might fill these missing values with the average age to ensure your analysis isn't skewed by missing
Q2). How do you handle duplicate rows in a DataFrame?
To handle duplicate rows, you can use the drop_duplicates() method, which removes duplicate rows based on all or specific columns. Example:
# Removing duplicate rows based on all columns
Real-World Scenario: If you're compiling a list of customers who made a purchase, and you accidentally have multiple entries for the same customer, using drop_duplicates() ensures each customer is
only represented once in your analysis.
Q3). How do you handle time series data in Pandas?
Pandas provides tools for working with time series data, such as resampling, shifting, and rolling window operations. You can also convert columns to datetime format using pd.to_datetime(). Example:
# Converting a column to datetime and resampling sales data by month
sales_data['Date'] = pd.to_datetime(sales_data['Date'])
monthly_sales = sales_data.resample('M', on='Date')['Sales'].sum()
Real-World Scenario: If you're analyzing sales data that is recorded daily, you can resample this data to monthly totals to see the sales trend over time.
Q4). How do you merge DataFrames with different shapes?
When merging DataFrames with different shapes, you can specify the type of join: inner (intersection), outer (union), left (left DataFrame's keys), or right (right DataFrame's keys). Example:
# Merging with an outer join to include all records from both DataFrames
merged_data = pd.merge(df1, df2, on='Product_ID', how='outer')
Real-World Scenario: If you have sales data for two different periods and want to merge them, but some products are missing in one period, an outer join ensures you keep all the records from both
Q5). How do you filter a DataFrame based on a condition?
You can filter a DataFrame based on a condition using boolean indexing. This allows you to select rows that meet a specific condition. Example:
# Filtering rows where sales are greater than 1000
high_sales = sales_data[sales_data['Sales'] > 1000]
Real-World Scenario: If you're interested in finding only the transactions where sales exceeded $1000, you can use this method to filter the data and focus on high-value transactions.
Q6). What is the difference between .loc and .iloc?
.loc[] is used for label-based indexing and allows you to access a group of rows and columns by labels or a boolean array, while .iloc[] is used for integer-based indexing, allowing you to access
rows and columns by position. Example:
# Using .loc[] to access rows with specific labels
region_sales = sales_data.loc[sales_data['Region'] == 'West', ['Sales', 'Profit']]
# Using .iloc[] to access specific rows and columns by position
subset = sales_data.iloc[0:5, 1:4]
Real-World Scenario: If you want to access sales and profit data for the 'West' region, you use .loc[]. If you're interested in accessing the first five rows and specific columns based on their
position, you use .iloc[].
Q7). How do you rename columns in a DataFrame?
You can rename columns in a DataFrame using the rename() function, where you pass a dictionary mapping the old column names to the new ones. Example:
# Renaming columns 'Sales' to 'Total_Sales' and 'Profit' to 'Total_Profit'
sales_data.rename(columns={'Sales': 'Total_Sales', 'Profit': 'Total_Profit'}, inplace=True)
Real-World Scenario: If your dataset has generic column names like 'Sales' and 'Profit', and you want to make them more descriptive, you can rename them to 'Total_Sales' and 'Total_Profit' for
Q8). What is the difference between a DataFrame and a Series in Pandas?
A DataFrame is a 2-dimensional labeled data structure with columns of potentially different types, similar to a table in a database or an Excel spreadsheet. A Series is a 1-dimensional labeled array,
similar to a single column or row of data. Example:
# Creating a DataFrame
df = pd.DataFrame({'Product': ['A', 'B', 'C'], 'Sales': [100, 150, 200]})
# Creating a Series
sales_series = pd.Series([100, 150, 200], name='Sales')
Real-World Scenario: If you're working with data for multiple products (like sales and profit), you'd use a DataFrame. If you're only interested in the sales data, a Series is sufficient.
Q9). How do you sort a DataFrame by a column?
You can sort a DataFrame by a column using the sort_values() function, specifying the column to sort by and the sort order. Example:
# Sorting the DataFrame by the 'Sales' column in descending order
sorted_data = sales_data.sort_values(by='Sales', ascending=False)
Real-World Scenario: If you want to analyze which products have the highest sales, you can sort your DataFrame by the 'Sales' column to quickly identify the top performers.
Q10). How do you filter rows based on multiple conditions?
You can filter rows based on multiple conditions by combining boolean conditions using the & (and) and | (or) operators. Example:
# Filtering rows where 'Sales' are greater than 1000 and 'Region' is 'West'
filtered_data = sales_data[(sales_data['Sales'] > 1000) & (sales_data['Region'] == 'West')]
Real-World Scenario: If you're looking for high-value transactions in a specific region, you can use this method to filter the data based on multiple criteria.
Q11). How do you apply a function to a DataFrame column?
You can apply a function to a DataFrame column using the apply() method, which allows you to pass a function that will be applied to each element of the column. Example:
# Applying a function to calculate the length of each product name
sales_data['Product_Length'] = sales_data['Product'].apply(len)
Real-World Scenario: If you want to add a column that shows the length of each product name, you can use apply() to compute this based on the existing 'Product' column.
Q12). How do you pivot a DataFrame?
You can pivot a DataFrame using the pivot() function, which reshapes the data based on column values, creating a new DataFrame where rows are transformed into columns. Example:
# Pivoting the DataFrame to show 'Sales' by 'Product' and 'Date'
pivoted_data = sales_data.pivot(index='Date', columns='Product', values='Sales')
Real-World Scenario: If you want to analyze sales by product over different dates, pivoting allows you to create a table where each product has its own column, and sales data is shown for each date.
Q13). How do you group data in Pandas?
You can group data in Pandas using the groupby() function, which allows you to group rows based on column values and then perform aggregate operations on these groups. Example:
# Grouping sales data by 'Product' and calculating the sum of sales for each product
grouped_data = sales_data.groupby('Product')['Sales'].sum()
Real-World Scenario: If you want to know the total sales for each product, grouping by 'Product' and summing the sales gives you a clear picture of each product's performance.
Q14). How do you read a CSV file into a DataFrame?
You can read a CSV file into a DataFrame using the read_csv() function. This function loads data from a CSV file into a DataFrame, which is a table-like data structure. Example:
# Reading data from a CSV file into a DataFrame
sales_data = pd.read_csv('sales_data.csv')
Real-World Scenario: If you have sales data stored in a CSV file, you can use read_csv() to load this data into a DataFrame for further analysis.
Q15). How do you save a DataFrame to a CSV file?
You can save a DataFrame to a CSV file using the to_csv() function. This function exports the DataFrame's data to a CSV file, which can be shared or stored. Example:
# Saving the DataFrame to a CSV file
sales_data.to_csv('sales_data.csv', index=False)
Real-World Scenario: After analyzing or processing data, you might want to save the results to a CSV file for reporting or sharing with colleagues.
Q16). How do you drop rows or columns from a DataFrame?
You can drop rows or columns from a DataFrame using the drop() method, specifying the axis parameter to indicate whether you're dropping rows (axis=0) or columns (axis=1). Example:
# Dropping a column 'Profit' from the DataFrame
sales_data.drop('Profit', axis=1, inplace=True)
Real-World Scenario: If 'Profit' is no longer needed for your analysis, you can drop it from the DataFrame to simplify your data and avoid confusion.
Q17). How do you handle categorical data in Pandas?
You can handle categorical data using the astype('category') method to convert columns to categorical data types. This can save memory and speed up operations. Example:
# Converting the 'Region' column to a categorical data type
sales_data['Region'] = sales_data['Region'].astype('category')
Real-World Scenario: If you have columns with a limited number of unique values, such as 'Region', converting them to categorical data can improve performance and reduce memory usage.
Q18). How do you deal with outliers in a DataFrame?
You can deal with outliers by identifying them using statistical methods like IQR (Interquartile Range) or Z-scores, and then handling them either by removing or adjusting them. Example:
# Identifying outliers using IQR
Q1 = sales_data['Sales'].quantile(0.25)
Q3 = sales_data['Sales'].quantile(0.75)
IQR = Q3 - Q1
outliers = sales_data[(sales_data['Sales'] < (Q1 - 1.5 * IQR)) | (sales_data['Sales'] > (Q3 + 1.5 * IQR))]
Real-World Scenario: If your sales data has some extreme values that could skew analysis, you can identify and handle these outliers to ensure more accurate results.
Q19). How do you concatenate DataFrames?
You can concatenate DataFrames using the concat() function, which allows you to combine them along a particular axis (rows or columns). Example:
# Concatenating two DataFrames along rows
combined_data = pd.concat([df1, df2], axis=0)
Real-World Scenario: If you have sales data for different regions in separate DataFrames, you can concatenate them to create a single DataFrame with all the data.
Q20). How do you reshape a DataFrame?
Reshaping a DataFrame can be done using methods like melt() to unpivot the DataFrame or pivot_table() to create a pivot table. Example:
# Melting a DataFrame to long format
melted_data = pd.melt(df, id_vars=['Product'], value_vars=['Q1', 'Q2', 'Q3', 'Q4'], var_name='Quarter', value_name='Sales')
Real-World Scenario: If your sales data is spread across multiple columns for different quarters, you can melt it into a long format where each row represents sales for a specific quarter, making it
easier to analyze trends over time.
Q21). How do you handle date and time in Pandas?
Pandas has robust functionality for handling date and time data using functions like pd.to_datetime() for conversion and datetime properties for extraction. Example:
# Converting a column to datetime and extracting the year
sales_data['Date'] = pd.to_datetime(sales_data['Date'])
sales_data['Year'] = sales_data['Date'].dt.year
Real-World Scenario: If you have a column with dates and need to extract the year for analysis, you can convert the column to datetime and then extract the year for each record.
Q22). How do you use apply() with multiple arguments?
You can use apply() with multiple arguments by passing a function that accepts multiple parameters, and using the args parameter to provide additional arguments. Example:
# Applying a function with multiple arguments
def calculate_discount(price, discount):
return price - (price * discount)
sales_data['Discounted_Price'] = sales_data.apply(calculate_discount, args=(0.1,), axis=1)
Real-World Scenario: If you need to apply a discount to each product price, you can use apply() with a custom function that takes both the price and discount rate as arguments.
Q23). How do you handle large datasets with Pandas?
Handling large datasets can be done using techniques such as chunking with read_csv() to process the data in smaller pieces, and using efficient data types. Example:
# Reading a large CSV file in chunks
chunk_iter = pd.read_csv('large_data.csv', chunksize=10000)
for chunk in chunk_iter:
Real-World Scenario: If you're working with a large dataset that cannot fit into memory, reading it in chunks allows you to process each chunk separately and avoid memory issues.
Q24). How do you perform aggregation in Pandas?
Aggregation in Pandas can be done using functions like groupby() combined with aggregate functions such as sum(), mean(), and count(). Example:
# Aggregating data to get total sales by 'Product'
total_sales = sales_data.groupby('Product')['Sales'].agg('sum')
Real-World Scenario: To find out the total sales for each product, you use aggregation to sum the sales for each product category.
Q25). How do you merge DataFrames on multiple columns?
You can merge DataFrames on multiple columns by specifying a list of column names in the on parameter of the merge() function. Example:
# Merging DataFrames on multiple columns
merged_data = pd.merge(df1, df2, on=['Product_ID', 'Region'], how='inner')
Real-World Scenario: If you have sales data in two DataFrames with both 'Product_ID' and 'Region' as common columns, merging on these columns ensures you combine the data accurately based on both
Q26). How do you handle duplicate index values?
Handling duplicate index values involves resetting the index using reset_index() or reindexing with a unique index. Example:
# Resetting the index to handle duplicates
cleaned_data = sales_data.reset_index(drop=True)
Real-World Scenario: If your DataFrame has duplicate index values causing confusion, resetting the index can provide a unique, sequential index that simplifies data handling.
Q27). How do you handle missing data in a DataFrame?
Handling missing data can be done using methods such as fillna() to replace missing values or dropna() to remove rows or columns with missing values. Example:
# Filling missing values in 'Sales' column with 0
sales_data['Sales'].fillna(0, inplace=True)
Real-World Scenario: If your data has missing sales figures, you can fill these with 0 to avoid issues during analysis or computation.
|
{"url":"https://regularpython.com/python-tutorial-multiple-choice-questions-and-answers/questions-results/26/","timestamp":"2024-11-06T17:52:53Z","content_type":"text/html","content_length":"41225","record_id":"<urn:uuid:e5bd4447-72b6-4c6b-a049-aed115138af6>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00701.warc.gz"}
|
Earnings Per Share | Advantages and Limitations of Earnings Per Share
Updated July 5, 2023
Earnings Per Share Meaning
Earnings per share (EPS) is a financial performance indicator that helps calculate a company’s profitability by dividing net income and weighted shares outstanding.
It helps investors calculate the capital generated from every share it holds, allowing them to gauge its profitability. For Instance, Amazon’s EPS for the third quarter of 2022 is $0.28. Therefore,
each of Amazon’s investors received $0.2 for each dollar they invested.
A higher ratio is preferable because it indicates a better return on the amount invested in purchasing the share. Therefore, investors usually consider this metric and the share price as significant
decision-making factors while analyzing the stocks of companies within the same industry.
Key Highlights
• Earnings Per Share is a financial ratio that measures a company’s profitability and analyzes each stockholder’s income.
• We can calculate it by subtracting preferred shares from the net income and dividing it by the number of outstanding shares.
• It is of five types: retained, cash, book value, etc.
• It indicates a company’s profit for each share of its stocks and finds its market value.
• It is essential for shareholders or investors as it helps them make wise yet profitable decisions while purchasing a stock.
The formula divides the net income generated and the distributed preferred dividend by the weighted average number of outstanding shares.
Mathematically, the equation is,
EPS = (Net Income – Preferred Dividend) / Weighted Average No. Of Shares Outstanding
• Net income is the amount that relates to shareholder equity after deducting the costs and expenses from a company’s profit or income.
• A preferred dividend is the sum of dividends due on a company’s selected stock from the company’s earnings.
• Weighted average no. of shares outstanding: It is the number of outstanding shares of a company estimated after accounting for share capital variations over a fiscal quarter
Earnings Per Share on Income Statement
We can usually find it on a company’s income statement. It is present after the net income, i.e., at the end of the consolidated income statement. Here only the overall value is present.
(Image Source: Walmart Annual Report 2022)
Later in the annual report, there is a breakdown for each component from the financial statement. Each part is in a note with a serial number. As seen below, the per-share breakdown is available in
the second note.
(Image Source: Walmart Annual Report 2022)
How to Calculate? – Real Company Excel Examples
Example #1:
Let us take the example of Apple Inc. to illustrate the computation. The company reported a net income of $99.8 billion in 2022. Calculate the EPS of Apple Inc. for the year 2022 if the weighted
average common shares outstanding is 16.21 million.
(Image Source: Apple Inc. Annual Report 2022)
Implementing the formula,
Therefore, Apple Inc. generated an EPS of $6.16 per share in 2022.
Example #2:
Let us take the example of Walmart Inc. to illustrate the computation. The company reported a net income of $13.94 billion in 2022, of which $0.26 billion is for the non-controlling interest.
Calculate the EPS of Walmart Inc. for the year 2022 if the weighted average common shares outstanding is 2.80 billion.
(Image Source: Walmart Inc. Annual Report 2022)
Implementing the formula,
Therefore, Walmart Inc.’s EPS for the year 2022 stood at $4.89 per share.
Example #3
Let us take the example of Amazon.Com.Inc to illustrate the computation. The company reported a net income of $11.58 billion in 2021. Calculate the EPS of Amazon for the year 2021 if the weighted
average common shares outstanding is 0.49 billion.
(Image Source: Amazon.Com.Inc. Annual Report 2021)
Implementing the formula,
Therefore, Amazon generated an EPS of $ in 2021.
More Examples
Example #1:
For the year that ended on Dec. 31, 2021, Company XYZ reported a net income of $525,000 and $150,000 common shares outstanding. The company had no preferred stock and neither issued nor repurchased
new shares. Therefore, calculate the EPS of Company XYZ.
Implementing the formula,
Therefore, the EPS = $3.5 per share
Example #2:
A Telecom Company generated a net income of $450,000 and had 100,000 outstanding shares for the first six months and 230,000 for the year’s second half. The company paid $50,000 in preferred
dividends during this year. Therefore, calculate the EPS of the Telecom company.
Let us first calculate the weighted average,
So, the weighted average outstanding shares are 165,000.
Implementing the formula,
Therefore, the EPS = $2.42 per share
Basic vs. Diluted EPS
Basic Diluted
It is the basic earnings of a company per outstanding share. The revenue of a company is applicable for every convertible share.
It is less important to the investors as it doesn’t reflect the convertible shares. It is of more importance to investors as it holds convertible shares.
It helps calculate the net profit made by a company. It also helps in calculating the net profit but with convertible securities.
It is not very accurate and entails detailed information. It is very accurate and gives very detailed information.
It is effortless to comprehend. It is complex and hard to comprehend.
It uses the common shares for calculation. It uses common and preferred shares, stock options, debt, warrants, etc., for calculation.
#1 Reported or GAAP
• It is the figure obtained from GAAP or Generally Accounting Principles.
• GAAP can alter the reported income of a company
• For instance, if a company counts the regular expense as unusual, it will be excluded from the calculation, thus boosting the value artificially.
#2 Ongoing or Pro-Forma
• It entirely depends on the general net income, excluding everything that could be an unusual one-time event.
• It is unarguably the best indicator for future EPS as it helps to find out the source of earnings from the base operations.
#3 Carrying or Book Value
• It helps determine the amount of a company’s equity in each share.
• It is majorly as per the balance sheet. Therefore, it is considered a stationary representation of a company’s performance.
#4 Retained
• It represents the profit a business chooses to keep rather than pay out as dividends to its shareholders.
• It is calculated by adding the net earnings to the current retained profits and subtracting the total dividend paid. Ultimately, divide the remainder by the total number of weighted average
#5 Cash
• It is beneficial as it helps find a company’s financial position in the market.
• It is a raw number (as it depicts the real profit earned) and, thus, can’t be altered like the net income.
How do the Stock Dividends & Splits Affect it?
Stock dividends are the dividend allocation of stocks instead of capital to existing shareholders. It does not entirely affect the share earnings as the increase in the number of shares is
negligible. However, if the number of shares increases by a vast number, the EPS value will drop to a lower ratio.
On the contrary, stock splits are the division of existing shares into a definite proportion. Due to the split declaration, there’s an increase in the number of shares a company holds. Thus, it will
result in lower EPS since the number of shares grows while the profit earned remains constant.
For Example, Company ABC did a 5-1 stock split in November 2022. Its EPS before the split was $10, while total shares stood at 100,000. After the split, the total shares are 500,000, which lowers the
EPS to $2. Their profit both before and after the split was constant, i.e., $1,000,000.
Earnings Per Share vs. P/E Ratio
Earnings Per Share P/E Ratio
It indicates a company’s profitability. The P/E pr price earning ratio is an indicator of a stock’s valuation for a company.
Dividing a company’s net income by the weighted average number of shares. Dividing a company’s stock value by the EPS.
It helps investors and analysts assess a company’s stock before The P/E ratio helps compare a company’s stock to its competitors. It can also help find out if a stock is over or
investing. undervalued.
It doesn’t consider a company’s debt. It is entirely according to the historical data of a company.
Why is it Important for Shareholders?
• Investors need to pay close attention to this value as it can drive the stock price. The stock price will probably increase if a company’s per-share earnings are high.
• It helps investors know if investing in a specific company will be profitable. A company with a consistent ratio indicates its profitability, making ways for it to pay higher dividends.
• It is an excellent indicator of a company’s performance and helps choose a promising company since it is the input in the P/E or Price or earning ratio.
• It can even help understand a company’s financial status and gauge its historical performance.
Advantages & Disadvantages
Advantages Disadvantages
It usually is a measure to price the stocks such that stocks with higher value attract higher prices. The companies can manipulate this value by reducing the number of outstanding shares by
buying back their own or reverse splitting stocks.
It captures the overall profit per share after paying off all the liabilities, such as interest on the It doesn’t capture the company’s performance as it fails to consider the share’s price.
debt, the dividend for preference shareholders, etc.
The calculation is straightforward, where we take the total income and divide it by the outstanding If a company is going into a loss, it has a negative ratio, and it isn’t easy to measure such
shares. a company.
Use the following calculator for the Earnings Per Share calculations:
Net Income
Preferred Dividend
Weighted Average No. of Shares Outstanding
Earnings Per Share =
Earnings Per Share = (Net Income - Preferred Dividend) / Weighted Average No. of Shares Outstanding
= (0 - 0)/ 0 = 0
Final Thoughts
Although EPS is a critical metric for investors, it should be seen with the share price to draw meaningful insights. One should also keep track of any change in the outstanding number of shares due
to stock splits, reverse splits, share repurchases, etc.
Frequently Asked Questions (FAQs)
Q.1 What do earnings per share mean?
Answer: Earnings per share is a financial method of calculating a company’s overall profit by dividing its net income by the average number of shares outstanding.
The formula for the same is
EPS = (Net Income – Preferred Dividend) / Weighted Average No. of Shares Outstanding Share.
Q.2 What is a good earnings-per-share ratio?
Answer: A good EPS depends entirely on the company and its performance under market expectations. Typically, a higher ratio contributes to a profitable deal. However, a higher value is not an
indicator of future performance.
Q.3 How do you calculate earnings per share?
Answer: The EPS is the company’s total or net income for every outstanding share. We calculate it by dividing the company’s total revenue by the weighted a of common shares outstanding.
Q.4 What are the types of earnings per share?
Answer: There are five types of earnings per share which are named as Reported or GAAP, Ongoing or Pro Forma, Carrying or Book Value, Retained, and Cash. An investor must understand and comprehend
all the types to make an effective stock decision.
Q.5 What does it mean if EPS is negative?
Answer: An EPS Is negative if a company’s net income is negative, indicating the company is spending or losing more money than earning. However, it is essential to understand that a negative value
doesn’t always relate to the stock sale.
Q.6 What is adjusted earnings per share?
Answer: Adjusted EPS is a calculation where an analyst deploys some adjustments to the numerator part. The aim is to remove or add the derivatives of net income that are non-recurring.
Q.7 How to increase earnings per share?
Answer: A company may look to repurchase its shares in the market to improve or increase its EPS. This way, a company doesn’t have to increase its net income. Furthermore, a good ratio always comes
from dividing net income by fewer shares.
Recommended Articles
This is a guide to Earnings Per Share. Here we discuss how to calculate it along with real-life examples. You may also look at the following articles to learn more –
|
{"url":"https://www.educba.com/earnings-per-share/","timestamp":"2024-11-09T06:21:22Z","content_type":"text/html","content_length":"351519","record_id":"<urn:uuid:6ed929f7-0bc4-470d-8e59-a36181808fdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00247.warc.gz"}
|
Cosine Rule Finding Angle Worksheet - Angleworksheets.com
Cosine Rule Finding Angles Worksheet – If you have been struggling to learn how to find angles, there is no need to worry as there are many resources available for you to use. These worksheets will
help you understand the different concepts and build your understanding of these angles. Using the vertex, arms, arcs, and … Read more
Cosine Rule Finding Angle Worksheet
Cosine Rule Finding Angle Worksheet – If you have been struggling to learn how to find angles, there is no need to worry as there are many resources available for you to use. These worksheets will
help you understand the different concepts and build your understanding of these angles. Using the vertex, arms, arcs, and … Read more
|
{"url":"https://www.angleworksheets.com/tag/cosine-rule-finding-angle-worksheet/","timestamp":"2024-11-09T12:47:37Z","content_type":"text/html","content_length":"52209","record_id":"<urn:uuid:a88a6b87-6b15-4cdc-8c22-4a251c35dfc6>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00796.warc.gz"}
|
International Baccalaureate Mathematics
To find the solutions of the inequality \(g(x) < f(x)\), you need to identify the values of \(x\) that make \(g(x)\) less than \(f(x)\). Here are the general steps to follow:
1. Subtract \(f(x)\) from both sides of the inequality to get \(g(x) - f(x) < 0\).
2. Find the critical values of \(x\) where \(g(x) - f(x) = 0\). These values are where the inequality may change.
3. Test each interval between the critical values by choosing a test point within it and plugging it into the inequality. If the test point satisfies the inequality, then all points in that interval
will satisfy the inequality. If it does not, then none of the points in that interval will satisfy the inequality.
Here are a couple of examples to illustrate this process:
Example 1:
Find the solutions of the inequality \(x^2 - 3x < 2x - 1\).
Subtracting \(2x - 1\) from both sides gives \(x^2 - 5x + 1 < 0\). Next, we find the critical values of \(x\) by setting \(x^2 - 5x + 1 = 0\) and solving for \(x\) using the quadratic formula:
$$x = \frac{5 \pm \sqrt{21}}{2}$$
These values divide the real number line into three intervals: \((-\infty, \frac{5-\sqrt{21}}{2})\), \((\frac{5-\sqrt{21}}{2}, \frac{5+\sqrt{21}}{2})\), and \( (\frac{5+\sqrt{21}}{2}, \infty)\). Now
we choose a test point in each interval and plug it into the inequality:
• \(-2\) does not satisfy \(x^2 - 5x + 1 < 0\) for \((-\infty, \frac{5-\sqrt{21}}{2})\).
• 3 satisfies \(x^2 - 5x + 1 < 0\) for \((\frac{5-\sqrt{21}}{2}, \frac{5+\sqrt{21}}{2})\).
• 6 does not satisfy \(x^2 - 5x + 1 < 0\) for \( (\frac{5+\sqrt{21}}{2}, \infty)\).
Therefore, the solutions of the inequality are \( \frac{5-\sqrt{21}}{2} < x < \frac{5+\sqrt{21}}{2}\).
|
{"url":"https://transum.org/Maths/National_Curriculum/Topics.asp?ID_Statement=399","timestamp":"2024-11-13T02:05:13Z","content_type":"text/html","content_length":"19577","record_id":"<urn:uuid:f2043d60-185b-4fa9-88e4-c9c5304596bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00012.warc.gz"}
|
Quadratic Formula Calculator (Maths solver)
Unlocking the Power of the Quadratic Formula Calculator: A Student’s Guide
Mathematics can often feel like a maze, especially when navigating through the complexities of algebra. Among the many tools available to students, the quadratic formula calculator stands out as a
beacon of hope for solving quadratic equations. In this article, we’ll explore what the quadratic formula is, how to utilize a calculator for these equations, and the best practices for mastering
this essential mathematical tool.
Understanding Quadratic Equations
Before diving into the calculator aspect, it’s crucial to understand what a quadratic equation is. A quadratic equation is an algebraic expression of the form:
[ ax^2 + bx + c = 0 ]
where ( a ), ( b ), and ( c ) are constants, and ( a \neq 0 ). The solutions, which are the values of ( x ) that satisfy this equation, can be found using the quadratic formula:
[ x = \frac{-b \pm \sqrt{b^2 – 4ac}}{2a} ]
The Significance of the Quadratic Formula
The quadratic formula is vital because it provides a straightforward way to determine the roots of any quadratic equation. Understanding when and how to apply this formula can help students enhance
their problem-solving skills and boost their confidence in mathematics.
Utilizing a Quadratic Formula Calculator
With the introduction of technology, solving quadratic equations has never been easier. A quadratic formula calculator allows students to input the coefficients ( a ), ( b ), and ( c ) and directly
obtain the roots of the equation.
How to Use a Quadratic Formula Calculator
1. Identify your coefficients:
• Write down the values for ( a ), ( b ), and ( c ) from your quadratic equation.
2. Input the coefficients:
• Enter these values into the calculator. Most online calculators will have designated fields for each coefficient.
3. Calculate the roots:
• Click the ‘Calculate’ button. The calculator will process the inputs and return the roots quickly.
4. Interpret the results:
• The output will typically include two values, corresponding to the two potential solutions the quadratic formula provides.
By following these simple steps, students can efficiently solve quadratic equations and save valuable time during exams or homework assignments.
Tip: Always double-check your inputs to avoid errors in calculations!
Advantages of Using a Quadratic Formula Calculator
Using a quadratic formula calculator comes with a variety of benefits:
• Saves Time: No need to manually calculate complex equations.
• Increases Accuracy: Reduces the risk of human error during calculations.
• Provides Instant Feedback: Students can quickly verify their homework or test answers.
• Enhances Understanding: By seeing the steps laid out, students can learn the solving process.
Real-World Applications of Quadratic Equations
Understanding quadratic equations and using a calculator to solve them extends far beyond the classroom. Here are a few examples of where these concepts are utilized:
1. Physics:
• Calculating projectile motion, such as the path of a thrown object.
2. Engineering:
• Designing structures, where precise measurements and predictions of forces are crucial.
3. Business:
• Analyzing profit maximization in quadratic revenue functions.
Tips for Mastering the Quadratic Formula
For students looking to deepen their understanding of the quadratic formula and its applications, consider these helpful strategies:
Practice Regularly: The more problems you solve using the quadratic formula, the more comfortable you will become.
Understand the Concept: Instead of just memorizing the formula, try to comprehend what it represents and how it derives from the format of the quadratic equation.
Resources for Further Learning
For those eager to explore more about quadratic equations, consider the following resources:
• Educational platforms like Khan Academy offer comprehensive tutorials on the topic.
• Online calculators such as Symbolab and Calculator Soup allow students to practice and visualize quadratic solutions interactively.
The quadratic formula calculator is an invaluable tool for students tackling quadratic equations. By understanding how to effectively use this calculator, you not only enhance your mathematical
skills but also gain insights into solving real-world problems.
So next time you encounter a challenging quadratic equation, remember that technology is here to help you navigate through it. Embrace the calculator, practice regularly, and let your confidence in
math soar!
Feel free to share your thoughts or experiences with quadratic equations in the comments below. Let’s unlock the mysteries of math together!
Popular Tools
|
{"url":"https://calculator3.com/quadratic-formula-calculator-maths-solver/","timestamp":"2024-11-05T17:23:29Z","content_type":"text/html","content_length":"65227","record_id":"<urn:uuid:9fb71455-6fda-4869-8121-6ea0270de36f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00830.warc.gz"}
|
[GAP Forum] [MCA2017] Computations in Groups and Applications in Montréal
1337777.OOO 1337777.ooo at gmail.com
Tue May 30 12:37:20 BST 2017
solves some question of Cartier which is how to program grammatical
meta (metafunctors) ...
This starting lemma of polymorph mathematics ( "categories" ) :
Coreflections( Set , Funtors( op C , Prop ) ) <=> Funtors( C , Set )
says that the senses ( "metafunctors models" ) onfrom some given
primitive-syntax graph may be instead dually viewed as senses (
"coreflective-metafunctors models" ) into some more-complete
metafunctors-grammar ( "classifying topos" ). And this starting lemma
may be upgraded such to perceive flat metafunctors via geometric
morphisms into this metafunctors-grammar. Also this starting lemma may
be upgraded such to perceive continuous flat metafunctors via
geometric morphisms into some sheaf metafunctors-grammar ...
The question is whether these new more-complete metafunctors-grammars
are relatively computational/decidable ? This COQ text solves half of
this question, the resting half is promised only ...
Outline: Primo, the things shall be confined to some meta regular
cardinal, and this COQ text assumes-as-axiom some maximum operation
inside this regular cardinal and this COQ text assumes-as-axiom some
functional extensionality of families of morphisms which are confined
to this regular cardinal. Secondo, as was done in the earlier COQ text
for colimits, one shall erase/extract some logical cocone-conditions
by assuming some erasure/extraction scheme as axiom instead of some
very-complicated-induction scheme (beyond induction-induction) ...
Tertio, the degradation lemma is more technical than in the earlier
COQ texts, because for the congruent-reduction from the copairing
operation applied onto some cocone of morphisms, one shall require
simultaneous full-reduction of every reductible morphism in the
cocone. Most of this COQ program and deduction is automated.
Keywords: 1337777.OOO//cartierSolution.v ; metafunctors-grammar ;
duality ; classifying topos
Memo :
The 1337777.OOO SOLUTION PROGRAMME originates from some random-moment
discovery of some convergence of the DOSEN PROGRAMME
[[http://www.mi.sanu.ac.rs/~kosta]] along the COQ PROGRAMME
The 1337777.OOO has discovered [[1337777.OOO//coherence2.v]]
[fn:4] [fn:5] that the attempt to deduce associative coherence by
Maclane is not the reality, because this famous pentagone is in fact
some recursive square. This associative coherence is the meta of the
semiassociative coherence [[1337777.OOO//coherence.v]] which does lack
some more-common Newman-style confluence lemma.
Moreover the 1337777.OOO has discovered
[[1337777.OOO//borceuxSolution2.v]] [[1337777.OOO//chic05.pdf]] that
the "categories" ( "enriched categories" ) only-named by the
homologist Maclane are in reality interdependent-cyclic with the
natural polymorphism of the logic of Gentzen, this enables some
programming of congruent resolution by cut-elimination
[[1337777.OOO//dosenSolution3.v]] which will serve as specification
(reflection) technique to semi-decide the questions of coherence, in
comparasion from the ssreflect-style.
Furthermore the 1337777.OOO has discovered
[[1337777.OOO//ocic04-where-is-combinatorics.pdf]] that the
Galois-action for the resolution-modulo ( "symmetry groupoid action"
), is in fact some instance of polymorph functors.
And the 1337777.OOO has discovered
[[1337777.OOO//laoziSolution2.v]] how to program polymorph
coparametrism functors ( "comonad" ).
And the 1337777.OOO has discovered [[1337777.OOO//chuSolution.v]] how
to program contextual limits of polymorph functors ( "Kan extension"
And the 1337777.OOO has discovered [[1337777.OOO//cartierSolution.v]]
how to program the metafunctors-grammar ( "topos" ), as the primo step
towards the programming of the ( "classifying-topos" )
sheaf-metafunctors-grammar which is held as augmented-syntax in the
Diaconescu duality lemma ( "coreflective-metafunctors models" ).
Another further step shall be to GAP-and-COQ program
[[https://www.gap-system.org]] the computational logic for Tarski's
decidability in free groups and for convergence in infinite groups ...
Additionnally, the 1337777.OOO has discovered random dia-para-logic
discoveries [[1337777.OOO//1337777solution.txt]] and
information-technology [[1337777.OOO//init.html]]
[[1337777.OOO//init.pdf]] [[1337777.OOO//makegit.sh.org]]
[[1337777.OOO//editableTree.urp]] [[1337777.OOO//gongji.ml4]] based on
the _EMACS org-mode_ logiciel which enables communication of
_timed-synchronized_ _geolocated_ _simultaneously-edited_
_multi-authors_ _format-able_ _searchable_ text, and therefore
_personal email_ and _public communication_ of
_multiple-market/language_ (中文话)textual COQ math programming, and
which enables _personal archiving_ and _public archiving_ and
therefore _public reviews / webcitations_ .
Whatever is discovered, its format, its communication is
simultaneously some predictable-time (1337) computational-logical
discovery and some random-moment (777) dia-para-computalogical
Memo ( "prealables d'un debat" ) ref the unavoidable question : what
is the "ends" / "added-value" / "product" in mathematics ? The "ends"
in mathematics are commonly described as "education" and "research".
In reality the only "research" ( predictable-time
computational-logical discovery, "correct" ) is the engineering of
some computational logical computer program or the engineering of some
physical prototype, and the rest is "education" ( random-moment
dia-para-computalogical discovery, teaching "ideas" ) which is
amplified by the question of "audience"/"market" or universality of
the communication-language medium [[1337777.OOO//gongji.ml4]].
Unfortunately sometimes forced-fool-and-theft/lie/falsification (
"absence of reality" ) [[1337777.OOO//1337777solution.txt]] defeats
both "research" by preventing anything other than " .PDF binary files
with pretty greek-letters and large-vertical-symbols ", and defeats
"education" by preventing public-review (including public-students
review). The medium of this forced-fool-and-theft/lie/falsification
may be monetarist or "tribalistic" (interdependent) ... such that it
is common ( "maybe half" ) question whether one "tribalistic"
(interdependent) teacher's purely predictable-time
computational-logical discovery, steals/hides/injects some other
original-teacher's "idea" (random-moment dia-para-computalogical
discovery) ... and such that it is common ( "maybe half" ) question
whether the fabrication/falsification of some non-necessary
grade/bounty/reward is precisely to permit "tribalistic"
(interdependent) determinism ... or am I the 7ok3r ?
paypal 1337777.OOO at gmail.com , wechatpay 2796386464 at qq.com , irc #OOO1337777
More information about the Forum mailing list
|
{"url":"https://www.gap-system.org/ForumArchive2/2017/005491.html","timestamp":"2024-11-13T23:02:28Z","content_type":"text/html","content_length":"10912","record_id":"<urn:uuid:c4419832-12e3-46dc-9880-a9e18ff5a1f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00113.warc.gz"}
|
Excel Formula: Calculate Duration Between Dates
In this tutorial, we will learn how to calculate the duration between two dates in Excel using the DATEDIF function. The DATEDIF function is a useful tool for calculating the difference between two
dates in terms of months and days. By using this formula, you can easily determine the duration between any two given dates.
To calculate the duration between two dates, we will use the following formula: =DATEDIF(C5,D5,"M")&"M "&DATEDIF(C5,D5,"MD")&"D". Let's break down this formula step by step.
First, the DATEDIF function is used to calculate the difference between the start date (cell C5) and the end date (cell D5) in terms of months. The unit of measurement is specified as "M".
Next, the result of the first DATEDIF function is concatenated with the string "M " using the ampersand (&) operator. This adds the label "M " to the result.
Then, the second part of the formula calculates the difference between the start date and the end date in terms of days. The result is concatenated with the string "D" using the ampersand operator.
The final result is a string representation of the duration between the two dates in the format "X months Y days", where X is the number of months and Y is the number of days.
For example, if the start date is "01/01/2022" and the end date is "03/15/2022", the formula would return the result "2M 14D", indicating a duration of 2 months and 14 days between the two dates.
In conclusion, the DATEDIF function in Excel is a powerful tool for calculating the duration between two dates. By using this formula, you can easily determine the difference in months and days and
represent it as a string. This tutorial has provided a step-by-step explanation of the formula and an example to help you understand how to use it in your own Excel spreadsheets.
Formula Explanation
The given formula calculates the duration between two dates in terms of months and days. It uses the DATEDIF function to calculate the difference between the start date (cell C5) and the end date
(cell D5) in months and days. The result is then concatenated with the appropriate labels to form a string representation of the duration.
Step-by-step Explanation
1. The DATEDIF function is used to calculate the difference between two dates. It takes three arguments: the start date, the end date, and the unit of measurement.
2. In this formula, the unit of measurement is "M" for months. The first part of the formula, DATEDIF(C5, D5, "M"), calculates the difference between the start date (cell C5) and the end date (cell
D5) in terms of months.
3. The result of the first DATEDIF function is concatenated with the string "M " using the ampersand (&) operator. This adds the label "M " to the result.
4. The second part of the formula, DATEDIF(C5, D5, "MD"), calculates the difference between the start date (cell C5) and the end date (cell D5) in terms of days.
5. The result of the second DATEDIF function is concatenated with the string "D" using the ampersand (&) operator. This adds the label "D" to the result.
6. The final result is a string representation of the duration between the two dates in the format "X months Y days", where X is the number of months and Y is the number of days.
For example, if the start date (cell C5) is "01/01/2022" and the end date (cell D5) is "03/15/2022", the formula =DATEDIF(C5,D5,"M")&"M "&DATEDIF(C5,D5,"MD")&"D" would return the result "2M 14D",
indicating a duration of 2 months and 14 days between the two dates.
|
{"url":"https://codepal.ai/excel-formula-explainer/query/FyqnPfWj/excel-formula-calculate-duration-between-dates","timestamp":"2024-11-08T01:10:11Z","content_type":"text/html","content_length":"92081","record_id":"<urn:uuid:4bbdd3c6-5edf-4b45-a15f-71d03cc99355>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00795.warc.gz"}
|
Convert irr to interest rate
Loan Calculator — Calculate EMI, Affordability, Tenure & Interest Rate. EMI For the banks, it represents their internal rate of return (IRR) on the loan. If you pay Convert Flat Interest Rate (a.k.a
simple interest) to Effective Interest Rate here. Use Loanstreet's online interest rate calculator to calculate Personal Loans, Car
11 Feb 2019 Conceptually, IRR is the interest rate (r) that sets the net present value (NPV) of cash flows (CF) to zero. You'll notice that IRR focuses on how It is a modified version of our IRR
calculator that allows you to specify not only the value of each cash flow, but also the interest rate at your financing loan and 14 Sep 2019 IRR returns the nominal interest rate per period. are
not even, you still have to convert nominal to effective in order to compare the outputs. 20 Dec 2018 ROI and IRR are complementary metrics where the main difference between the the fact that you can
earn interest (which compounds) on your invested dollar. IRR is the rate of return that equates the present value of an 4 Sep 2018 The IRR of that calculation is the effective interest rate that our
fund has experienced. Let's get to layout. Here's the start of our table: There are
7 Jun 2006 To convert a monthly IRR into an annualized IRR number, you use the I need to calculate the effective interest rate, using compounding base
Convert Flat Interest Rate (a.k.a simple interest) to Effective Interest Rate here. Use Loanstreet's online interest rate calculator to calculate Personal Loans, Car By definition, IRR compares
returns to costs by finding the interest rate that produces a zero NPV for the investment cash flow stream. Not surprisingly, interpreting 6 Jun 2019 Internal rate of return (IRR) is the interest
rate at which the net present value of all the cash flows (both positive and negative) from a project or Explanation + example of calculating the interest rate implicit in the lease. So using simple
MS Excel formula IRR applied to the series of your cash flows 11 Feb 2019 Conceptually, IRR is the interest rate (r) that sets the net present value (NPV) of cash flows (CF) to zero. You'll notice
that IRR focuses on how It is a modified version of our IRR calculator that allows you to specify not only the value of each cash flow, but also the interest rate at your financing loan and 14 Sep
2019 IRR returns the nominal interest rate per period. are not even, you still have to convert nominal to effective in order to compare the outputs.
10 Aug 2009 IRR is Internal Rate of Return and it is used to calculate the returns I used XIRR to determine the effective interest rate (EIR) of a loan with Monthly to Yearly IRR conversion
calculated using '(1+r)^12 – 1' is close to XIRR().
IRR stands for internal rate of return and is used in capital budgeting to measure the potential profitability of an investment. It can be defined as the interest rate that makes the Net Present
Value (NPV) of all cash flows from the investment equal to zero. Calculate the IRR (Internal Rate of Return) of an investment with an unlimited number of cash flows.
2,500 converts to an Effective Interest Rate of 17.27% p.a.. This method is particularly used to calculate the interest payable for personal loans and vehicle loans.
Interest Rate / Internal Rate of Return (IRR) Calculator. Who This Calculator is For: Borrowers who want to know what interest rate (also know as internal rate of
Question: What's the annual interest rate? flow worksheet, enter the data, compute the IRR, and compute the NPV using an interest rate per period ( I ) of 20%.
Convert Flat Interest Rate (a.k.a simple interest) to Effective Interest Rate here. Use Loanstreet's online interest rate calculator to calculate Personal Loans, Car By definition, IRR compares
returns to costs by finding the interest rate that produces a zero NPV for the investment cash flow stream. Not surprisingly, interpreting 6 Jun 2019 Internal rate of return (IRR) is the interest
rate at which the net present value of all the cash flows (both positive and negative) from a project or Explanation + example of calculating the interest rate implicit in the lease. So using simple
MS Excel formula IRR applied to the series of your cash flows 11 Feb 2019 Conceptually, IRR is the interest rate (r) that sets the net present value (NPV) of cash flows (CF) to zero. You'll notice
that IRR focuses on how It is a modified version of our IRR calculator that allows you to specify not only the value of each cash flow, but also the interest rate at your financing loan and 14 Sep
2019 IRR returns the nominal interest rate per period. are not even, you still have to convert nominal to effective in order to compare the outputs.
IRR stands for internal rate of return and is used in capital budgeting to measure the potential profitability of an investment. It can be defined as the interest rate that makes the Net Present
Value (NPV) of all cash flows from the investment equal to zero. Calculate the IRR (Internal Rate of Return) of an investment with an unlimited number of cash flows. Interest Rate / Internal Rate of
Return (IRR) Calculator Who This Calculator is For: Borrowers who want to know what interest rate (also know as internal rate of return or IRR) they are paying when only the payments and term are
known. Internal rate of return (IRR) is the minimum discount rate that management uses to identify what capital investments or future projects will yield an acceptable return and be worth pursuing.
The IRR for a specific project is the rate that equates the net present value of future cash flows from the project to zero. The IRR is the interest rate (also known as the discount rate) that will
bring a series of cash flows (positive and negative) to a net present value (NPV) of zero (or to the current value of cash The internal rate of return (IRR) is the discount rate providing a net value
of zero for a future series of cash flows. The IRR and net present value (NPV) are used when selecting investments
|
{"url":"https://topoptionsudzi.netlify.app/knudson60481myro/convert-irr-to-interest-rate-53.html","timestamp":"2024-11-13T07:27:52Z","content_type":"text/html","content_length":"33749","record_id":"<urn:uuid:53279868-e5ff-4446-b97a-af81592e0ba0>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00030.warc.gz"}
|
Aliens Not So Strange
If the Martian life form transpires to be eerily similar, this might only show that Life … in reality has very few options. … No sentient forms weaving their existence in vast interstellar dust
clouds, farewell to bizarre filamentous species greedily soaking up the intense magnetic fields of a crushingly oppressive neutron star and on even Earth-like planets no forms that we might as well
call conceptualized pancakes. … Contrary to received neo-Darwinian wisdom, life on Earth at any level of organization—from molecular to societal— will provide a remarkably good guide as to what ought
to be ‘out there’.
So argues Simon Conway Morris, from inside view considerations. I think he’s mostly right, but based on an outside view.
Here it is: when relevant parameters can vary by large magnitudes, the most common type of thing is often overwhemingly more common. For example, processes that create and transmute elements vary
greatly in their rates. So even though there are over a hundred elements in the periodic table, over 90% of all atoms are hydrogen, so the odds that two randomly selected atoms are the same element
is >80%.
Similarly, since the influences on how many eyes a human has vary greatly across eye numbers, most humans have the same number of eyes: two. Most humans do not have the same last name, however, since
rates of gaining or changing names do not vary by huge factors.
The same principle applies to life. Life might have evolved in a great many kinds of environments, based on a great many sorts of elements, and using many types of organization. To the extent that
some environments are far more common, or are far more supportive of high rates of biological activity, most biological activity in the universe should occur in the few most common and supportive
environments. Similarly if some elements or organizations are far more supportive of biological activity and innovation, most life should use those elements and organization.
I expect cosmic environments to vary enormously in both volume and in support for biological activity. I also expect some types of elements and organizations to be far more supportive of biological
activity and innovation. I thus expect most life to be based on similar elements and organizations, to originate and be active and innovative in places similar to where our life orginated and is most
active and innovative.
This view is supported by the fact that the assumption that life originates via the entropy of sunlight hitting “dust” predicts many cosmological parameters. In ’08 I reported:
This causal entropic principle so far successfully predicts dark energy strength, matter fluctuation ratio, baryonic to dark matter ratio, and baryonic to photon matter ratio! … A simple reading of
the principle is that since observers need entropy gains to function physically, we can estimate the probability that any small spacetime volume contains an observer to be proportional to the entropy
gain in that volume. … Exclud[ing] entropy of cosmic and black holes horizons, … ignor[ing] future observers getting far more efficient and aggressive in using entropy, … they estimate that, aside
from decaying dark matter, near us most entropy is made by starlight hitting dust, and most of that is in the past.
Our life probably started from sunlight hitting “dust” (including planets). More quotes from Simon Conway Morris:
Will the extra-terrestrials be utterly familiar, completely alien (whatever that is supposed to mean) or is the search a complete waste of time? What will it be? Worlds full of shoppers and
celebrities, biological constructions so unfamiliar that they are only brought home by accident and then inadvertently handed over for curation in a department of mineralogy or an exercise in
galactic futility as one sterile world after another rolls beneath the spaceship windows. …
Given the likely range of planetary environments, such as a 100 km deep ocean or an atmosphere substantially denser than that of Venus, what fraction of any potentially habitable biosphere is
actually occupied? Is the terrestrial ‘habitation box’ only a small proportion of all of biological occupancy space or, alternatively,has life here more or less reached the limits of what is possible
anywhere? …
What we find here, therefore, will be a reliable guide to what we will find anywhere. Paradoxically, confidence that this may be correct comes from the dramatic increase in our knowledge of so-called
extremophiles. … It may be that the current thermal limit (ca 120◦ C) of microbial activity [10] may not be much exceeded. In part, this is because water at this temperature is necessarily
pressurized, and the equivalent limits of microbial habitation in the Earth’s crust (e.g. [11,12]) may not exceed ca 5 km (equivalent to ca 110 MPa; see also below) and an ambient temperature
(depending on the local geothermal gradient) of at least 120◦ C. … While the environmental extremes of these and a few other multicellular organisms are impressive [18], the overall size of the
habitation box for eukaryotes is unsurprisingly substantially smaller than that of life as a whole. … For life as a whole, there may be no lower limit in as much as at increasingly lower temperatures
normal growth then yields to physiological maintenance, and ultimately dormancy where ‘coincidentally’ rates of DNA and protein repair are equal to those of macromolecular deterioration. …
A more fundamental question, however, is whether because of locally contingent circumstances terrestrial life just happens to occupy some fraction, perhaps very small, of the total carbaquist
habitation box. As we have already seen, however, in the case of minimum temperatures, pH range, salinities and desiccation, arguably the defined limits for all carbaquists have been reached by life
on Earth, and with somewhat less certainty this applies also to hyperthermophiles. … there is little evidence of microbial viability significantly in excess of the tolerances seen in terrestrial
piezophiles. … Given that at least in terms of carbaquist life it is likely that lipid membranes are universal, this suggests that viability may not extend much beyond the deepest oceanic trenches
(ca 11 km) or equivalent pressure zone within the crust of the Earth (ca 5 km). But the viability of lipids is not the only problem. Another potential constraint of the habitation box is the
behaviour under different temperature and pressure regimes of hydration water essential to biomolecular function. Not only is the optimal zone remarkably narrow, with that for temperature being
curiously coincidental in both micro-organisms and homeotherms (ca 36–44◦ C), but the phase diagram for hydration water is circumscribed and little larger than the terrestrial habitation box.
If I had to place bets, it would be that life on other planets, would be dangerously similar to ours. As for 're-rolling the dice' on earth, every species of birds, reptiles, primates, etc, would
evolve differently, however, they would still, end up, very similar to what we have now. They would all end up with 2 eyes, one mouth, one nose, because it works best.
Consider this, all, alien, automobiles, would also go through a similar evolution as ours have. There is no way around it. All cars need 4 wheels, a body, steering wheel, etc. The first ' steering
wheel' was a stick. then a round wheel. The first materials, out of necessity would be metal, wood, leather, etc. This would make the first cars on other planets, almost identical to the model A,
on earth. . How could an automobile, built be aliens, on another planet, evolve any differently, than ours have, on earth?? The same is probably true with life's chemical process's. We could
probably interbreed with aliens also.
Expand full comment
Interesting paper on a thermodynamic origin of life. Aliens could even have RNA and DNA.
Expand full comment
14 more comments...
|
{"url":"https://www.overcomingbias.com/p/aliens-not-so-strangehtml","timestamp":"2024-11-04T15:31:29Z","content_type":"text/html","content_length":"162469","record_id":"<urn:uuid:f5b0cbdf-af22-48cd-8fdb-33d735a906f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00843.warc.gz"}
|
Watts to Kilowatts- Understanding Electricity Bills Better -
Watts to Kilowatts- Understanding Electricity Bills Better
How Many Watts in Kilowatts & Energy Bills Explained…
reducing your electricity bill, you need to know where you’re starting from and how much electricity you are consuming in the first place.
Since electricity and its related terminology are not things that are very well known to an average person, this can seem a bit intimidating at first. But with a few basic terms known and understood,
it is actually quite easy to make sense of your electric bill.
The first and most important thing for you to understand is the term “kilowatt hour.”
What & How much is a Kilowatt Hour?
How Many Watts In a Kilowatt?
This system of measurement uses the metric system. So a kilowatt is equal to 1000 watts. Now watts and kilowatts are measured with watt hours and kilowatt hours. This is how much electricity is
consumed over the given course of an hour.
Let’s go back to the light bulb example to illustrate this. Let’s assume you have a 100 watt light bulb that runs for 10 hours a day. At the end of each day this light bulb would have consumed one
kilowatt hour of energy. 100 watts times 10 equals 1 kilowatt. The math is actually pretty simple.
This is how your power company measures how much electricity you are consuming. If you look at your bill, you will see the total number of kilowatt-hours you consumed.
Now that you know what a kilowatt is you can begin to figure out how much electricity each of your appliances is using.
How Do Electric Companies Calculate Electric Energy Used?
How Much Does A Kilowatt Hour Cost? Answer: Average in the USA= 12 cents
Depending on where you live in the USA depends on your kilowatt hour cost rate. Hawaii residents have to pay about 24 cents per kilowatt hour which is higher than anywhere else in the country. Other
states and regions pay only eight cents per kwh. Central Florida where I live is 11 cents per kilowatt hour. But in Canada, we have been hearing reports of 26 cents per kilowatt hour. And power rates
are climbing at an alarming rate. Just last week a gentleman called me from Delaware and said his rates were doubling in the next few years.
The average for the nation is 12 cents per kilowatt hour. But again, your rate may vary greatly from this.
Another thing to consider is that your electric company charges you based on how much electricity you use. This means that once you use above a certain amount of electricity, your rate actually goes
This information can also be found on your electric bill. If you have any questions about this you can always call your power company and have it explained to you so that you understand how much you
are exactly paying for electricity. But don’t expect them to give you tons of great information on using less electricity… that’s like going to a car lot and asking the car salesman, ” Do I need to
buy this car?” They are trying to sell cars so of course their answer will always be yes! The power company is there to provide you a commodity, electricity. So asking, ” How can I use less of what
you sell? ” is not going to get you many workable answers.
Now that you know what a kilowatt hour is and how the electric company calculates your usage, it’s time to find out exactly WHAT is costing so much in your home or business and HOW to change that
immediately so you can stop paying such high energy bills. Let’s start focusing on how you can pay less of your hard earned dollars to your power company. Continue Reading… Why is My Electric Bill So
High? Free Tips to Lower Energy Bills Easily. >>
|
{"url":"https://www.electricsaver1200.com/bills/","timestamp":"2024-11-09T01:02:47Z","content_type":"text/html","content_length":"254024","record_id":"<urn:uuid:c6a1afc1-ca15-452c-b679-9e42d75f1a84>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00617.warc.gz"}
|
Design of Minimum Length Nozzle Using Method of Characteristics
Volume 10, Issue 05 (May 2021)
Design of Minimum Length Nozzle Using Method of Characteristics
DOI : 10.17577/IJERTV10IS050268
Download Full-Text PDF Cite this Publication
S. Asha , G. Dhathri Naga Mohana , K. Sai. Priyanka , D. Govardhan, 2021, Design of Minimum Length Nozzle Using Method of Characteristics, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY
(IJERT) Volume 10, Issue 05 (May 2021),
• Open Access
• Authors : S. Asha , G. Dhathri Naga Mohana , K. Sai. Priyanka , D. Govardhan
• Paper ID : IJERTV10IS050268
• Volume & Issue : Volume 10, Issue 05 (May 2021)
• Published (First Online): 28-05-2021
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Text Only Version
Design of Minimum Length Nozzle Using Method of Characteristics
S. Asha1, G. Dhathri Naga Mohana2, K. Sai Priyanka3, D. Govardhan4
Research student1, Research student2, Assistant professor3, Head of the department4, Department of Aeronautical Engineering, Institute of Aeronautical Engineering, Telangana, India.
Abstract: The nozzle efficiency is largely affected by the nozzle contour. As a result, an optimum geometrical design of a solid rocket motor nozzle is designed in order to achieve maximum thrust and
velocity. Two types of nozzle exit configurations are well-thought-out in the design process, conical and contoured. A conical nozzle is used for its simplicity of production, but a contoured nozzle
is preferred because it increases performance while reducing weight, a contour nozzle also adjusts the flow such that the exhaust product is in a less or more axial direction, reducing divergence
losses. In this case, we'll look at a contour-shaped nozzle that can optimise thrust as needed. Process of Designing a contour nozzle (minimum length nozzle) utilising method of characteristics (MOC)
is carried out MOC is preferred owing to its widespread usage in the industry.
Keyword: Solid rocket motor, Nozzle, Trust, contour nozzle, minimum length nozzle, Method of characteristics.
Rocket propulsion systems are classified in a variety of ways, including chemical, nuclear, and solar energy sources, as well as fundamental functions such as booster stages, sustained or upper
stages, attitude control, orbit station maintaining, and the sort of vehicle they drive (aircraft, missile, assisted take off, space vehicle, etc.) or according to their size, propellant type,
structure, or the number of rocket propulsion units utilised in a specific vehicle, as well as the thrust generation mechanism. The kind of propellant utilised in a rocket can be used to classify
the propulsion system. Liquid and solid propellants are the two most common kinds.
Solid propellant rocket motor
Solid propellants (fuel/oxidizer) have been utilised in military applications such as missiles because they can be stored for a long period without considerable propellant deterioration and
because they can be launched reliably most of the time. The grain is a solid propellant (or charge) that includes all of the chemical components required for full combustion. It is designed to
burn smoothly and at a predefined
pace on all exposed internal grain surfaces once ignited. As the propellant is burnt and consumed, the inside cavity increases. The heated gases that arise travel through the supersonic nozzle,
providing thrust. There are no feed systems or valves, and there are no or very few moving components throughout the system.
2. NOZZLE
The exhaust gas is accelerated out of the nozzle to create thrust in a divergent diverging nozzle, also known as a DE LAVAL nozzle. The heat generated in the combustion chamber of a solid rocket
engine is generally converted into kinetic energy in the exhaust using a convergent divergent nozzle. Newton's 3rd law of motion describes how a rocket engine employs a nozzle to accelerate
heated exhaust. The mass flow rate through the engine, the exit velocity of the flow, and the pressure at the engine's exit determine the amount of thrust generated. The rocket nozzle design
determines the value of these three flow variables. The fundamental thermodynamic principles may be used to estimate throat area, nozzle half angle, and expansion ratio for simple nozzles with
non-contoured conical outputs.
To decrease divergence loss, enhance particular impulse somewhat, and minimise nozzle length and mass, a more sophisticated contoured (or bell-shaped) nozzle is utilised.
Minimum Length Supersonic Nozzle
Supersonic nozzles are used to increase a flow to desired supersonic levels in a number of technical applications. The use of supersonic nozzles is possible. There are two categories of nozzles:
progressive-expansion nozzles and nozzles with a minimum length. Gradual-expansion nozzles are widely employed in situations where maintaining a high- quality flow at the appropriate exit
conditions is critical. For other applications, the huge weight and length penalties associated with gradual expansion nozzles make them impractical, hence minimum-length nozzles with a sharp
corner to give the initial expansion are usually utilised. The flow may be separated into simple and non-simple areas for both gradual-expansion and minimum-length nozzles. Mach wave reflections
and intersects describe a non-simple area. It is preferable to limit the non-simple zone as much as feasible in order to achieve the criterion of consistent conditions at the nozzle outlet. This
can be accomplished by designing the nozzle surface so that no Mach waves (e.g., characteristics) are generated or reflected while the flow is straightened. As a result, the Method of
Characteristics is used to build a supersonic nozzle that fits these parameters. The design of both a gradual-expansion and a minimum- length nozzle is illustrated in this paper.
3. NOZZLE DESIGN
For designing a profile of a nozzle Different methods have been proposed, Method of characteristics and G.V.R. Rao approximation method. Method of characteristics is the
method thats discussed below for designing a minimum length nozzle.
General Theory
The nonlinear differential equation of the velocity potential can be used to explain the physical circumstances of a two- dimensional, steady, isentropic, irrotational flow. The technique of
characteristics is a mathematical framework that may be used to discover solutions to the aforementioned velocity potential while meeting certain boundary conditions, resulting in the governing
partial differential
Equation 3.2 shows the velocity potential by definition, and then equations 3.3 and 3.4 are derived.
equations (PDEs) becoming ordinary differential equations (ODEs).
Method of Characteristics
In a supersonic flow, characteristics are lines' oriented in specified directions along which disturbances (pressure waves) propagate. The Method of Characteristics (MOC) is a numerical approach
for addressing two-dimensional compressible flow problems, among other things. Flow parameters such as direction and velocity may be estimated at different sites across a flow field using this
The following are the three qualities of characteristics:
Property 1: A curve or line in a two-dimensional supersonic flow along which physical disturbances propagate at the local speed of sound relative to the gas is a characteristic.
Property 2: A characteristic is a curve along which the flow qualities are continuous, even though the initial derivatives are discontinuous, and the derivatives are indefinite.
Property 3: A characteristic is a curve that may be used to convert the controlling partial differential equations into an ordinary differential equation (s).
Minimum Length Nozzle (MLN) using Method of Characteristics (MOC)
Characteristic Lines
Two-Dimensional Irrotational Flow For steady, two- dimensional, irrotational flow, Equation 3.1, which is the whole velocity potential
equation, is used to determine the characteristic lines. Note that is the velocity potential.
Equations 3.6, 3.7, and 3.8 are produced by substituting the equation 3.2 into equations 3.1, 3.3, and 3.4.
Equations 3.6 over 3.8 are a system of snchronized, linear, algebraic equations in the variables xx, yy, and xy. By applying Cramers rule, the explanation for xy is to be the following equation
A point A and its surrounding neighbourhood in an arbitrary flow field are represented in Figure 3.1. The velocity potential's derivative,xy, has a specific value at point A. The solution for xy
at point A for an arbitrary dx and dy option, and dy for an arbitrary direction away from point A specified by the dx and dy option. There are equivalent values of the change in velocity du and
dv for the given dx and dy.
Figure 3.1 streamline geometry
The slope of the characteristic lines is shown below in equation 3.10 and equation 3.11.
There are three key points to remember:
1. When M > 1, each point of the flow field has two actual characteristics. Furthermore, equation 3.1 is a hyperbolic partial differential equation in this case.
2. If M = 1, each point of the flow has just one actual feature. A parabolic partial differential equation is Equation 3.1.
3. If M is less than one, the properties are imaginary, and equation 3.1 is an elliptic partial differential equation.
Because each point in a flow has two real characteristics, the method of characteristics (MOC) becomes a suitable strategy for solving supersonic flow. The equation 3.10 for a steady,
two-dimensional supersonic flow is investigated. Study the streamline as shown in Figure 3.1. At point A, u = Vcos and v = Vsin Equation 3.10 develops equation 3.12 shown below.
Since, sin (1/ M), equation 3.12 is obtained as follows.
Therefore, the following equation 3.13 is obtained.
The slope of the characteristic lines is given by equation 3.14, which is derived from trigonometry and algebra.
Figure 3.2 depicts a graphical representation of equation
3.14. The streamline establishes an angle with the x axis at point A. There are two characteristics that pass-through point A, one at an angle above the streamline and the other at an angle below
Mach lines are the distinguishing lines. The angle + characteristic is referred to as a C+ characteristic, and it is a left-running characteristic. A C- characteristic is a right- running
characteristic that is given by the angle – Because the flow parameters change from point to point in the flow, the characteristics are curved.
Figure 3.2 left and right characteristic lines
Compatibility Equations
The compatibility equation is the following equation 3.15
To maintain the flow field derivatives small, N is zero only when D is zero, according to equation 3.9. When D = 0, only directions along the characteristic lines are examined, and when N = 0,
only directions along the characteristic lines are examined. As a result, equation 3.15 is only valid along characteristic lines. As a result, equation 3.16 is as follows:
The compatibility equation therefore is shown in equation
3.18 below
The equation's negative version is applied to the C- characteristic, whereas the positive form is applied to the C+ characteristic. For Prandtl-Meyer flow, equation 3.18 is equal to equation 3.19
given below.
Equation 3.19 is substituted by algebraic compatibility equations 3.20 and 3.21 since the equations are equivalent
Compatibility Equations Point by Point Along the Characteristics
Internal Flow
If the flow field conditions are known at two points in the flow, the conditions at a third point can be found. The third point is located by the intersection of the C- characteristic through the
first point and the C+ characteristic through the second point, as shown in Figure 3.3 below.
Figure 3.3-unit process for MOC
The and for the third point are found in terms of the known values of K+ and K- as shown in equation 3.22 and equation 3.23 below
As a result of the known values at the first and second points, the flow conditions at the third point may now be calculated. Equation 3.24 is used to get the Mach number.
After calculating the Mach number, the pressure, temperature, and density may be determined using isentropic flow relations, as illustrated in the equations below.
Assume that the characteristics are straight-line segments connecting the grid points, with average slopes. C- characteristic across the first point is drawn as a straight line with an average
slope angle. As indicated in the equation below, the C+ characteristic across the second point is drawn as a straight line with an average slope angle.
Supersonic Nozzle Design
In order to increase the speed of an internal steady flow via a duct from subsonic to supersonic, the duct must be convergent-divergent in form, as shown in Figure 3.4.
Assume that the sonic line is perfectly straight. In the throat, the flow increases to sonic speed. The duct splits downstream of the sonic line. The expansion section in Figure 3.5 is shortened
to a point in minimal length nozzles, and the expansion occurs by a centred Prandtl-Meyer wave emerging from a sharp-centre throat with an angle wmax,
ML, as shown in Figure 3.5. The length of the supersonic nozzle, L, is the smallest value that can be achieved while maintaining shock-free, isentropic flow.
Assume that the exit Mach numbers of the nozzles in Figures
3.4 and 3.5 are the same. The expansion contour of the minimum-length nozzle depicted in Figure 3.5 exhibits a sharp corner at point a. Only two wave systems meet the fluid: right-running waves
from point a and left-running waves from point d. In equation 3.25, the wall's expansion angle downstream of the neck is illustrated.
Figure 3.4 schematic of supersonic nozzle design by MOC.
Figure 3.5 schematic of minimum length nozzle
It is point e when the centreline Mach number matches the design exit Mach number. The expansion section comes to an end at point c, which fixes its length as well as the value of wmax. The
number of nodes is calculated using the formula below.
Figure 3.6 Schematic of characteristic lines for minimum length nozzle
4. RESULTS AND DISCUSSION
By compressing the expansion section, the design of a minimal length supersonic nozzle is capable of constructing a minimal length nozzle.
The entire length of the nozzle is reduced when the expansion portion is contracted. The length of the supersonic nozzle is kept to a minimum in the preceding design because the expansion section
is kept to a minimum.
In actuality, the expansion part has been compressed to point at the throat's end. using the Prandtl-Meyer equation the characteristic lines are solved in MATLAB all the points are plotted
against nozzle axis in which different characteristic lines are seen
Figure. 4.1 plot of the contour curve with the characteristic lines in MATLAB
Figure 4.2 plot of wall points
Figure 4.3 points reflected on the wall
These points are later used for plotting a 2D domain FOR CFD analysis You must also choose an optimal number of features, which should be large enough to form a curved (bell-shaped) contour
towards the end of the process. Make sure your lines are straight. MOC's overall efficacy may be measured.
5. CONCLUSION
There are a variety of uses for supersonic nozzles. Normally, they are exposed to a complicated flow pattern. To achieve the high precision and huge calculations required by current high-speed
applications, a computer is required. As a result, a computerised approximation method could be a preferable way to address such an issue. The most acceptable approach to utilise with the
supersonic nozzle design is the characteristic method.
By doing several iterations, we can improve our results. Slight modfications in the input parameters algorithm We were able to effectively use the method of characteristics to produce a nozzle
shape for a nozzle with a minimum length. The nozzle points acquired may be imported into any CAD programme for further refining and production. To achieve the greatest outcomes in line with the
input specification, it is strongly suggested to combine this strategy with an iterative design approach.
6. REFERENCES
1. Sreenath, K.R. and Mubarak, A.K., 2016. Design and analysis of contour bell nozzle and comparison with dual bell nozzle. International Journal of Research and Engineering, 3(6), pp.52-56.
2. Khan, M.A., Sardiwal, S.K., Sharath, M.S. and Chowdary, D.H., 2013. Design of a supersonic nozzle using method of characteristics. International Journal of Engineering and Technology, 2,
3. Kulhanek, S.L., 2012. Design, analysis, and simulation of rocket propulsion system (Doctoral dissertation, University of Kansas).
4. Mon, K.O. and Lee, C., 2012. Optimal design of supersonic nozzle contour for altitude test facility. Journal of mechanical science and technology, 26(8), pp.2589-2594.
5. Brown, C.R., 2019. Preliminary Nozzle Design for use in a Small- Scale, High Mach Number Wind Tunnel (No. LLNL-TH-787627). Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States).
6. Anderson Jr, J.D., 2010. Fundamentals of aerodynamics. Tata McGraw-Hill Education.
7. Matyka, M., 2003. Prandtl-Meyer Expansion Wave. Computational Physics Section of Theoretical Physics at University Wroclaw in Poland.
8. H. W. Liepmann and A. Roshko, Elements of gas dynamics, Seventh Ed. John Wiley & Sons, Inc, New York (1996) 284-304.
9. M. Going, Nozzle design optimization by method-of- characteristics, AIAA paper 90-2024 (1990).
You must be logged in to post a comment.
|
{"url":"https://www.ijert.org/design-of-minimum-length-nozzle-using-method-of-characteristics","timestamp":"2024-11-13T08:47:38Z","content_type":"text/html","content_length":"79926","record_id":"<urn:uuid:017a8351-3613-423a-9ff1-aee2214dd274>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00717.warc.gz"}
|
User Studies of Principled Model Finder Output
Tags: Verification, User Studies
Posted on 01 July 2017.
For decades, formal-methods tools have largely been evaluated on their correctness, completeness, and mathematical foundations while side-stepping or hand-waving questions of usability. As a result,
tools like model checkers, model finders, and proof assistants can require years of expertise to negotiate, leaving knowledgeable but uninitiated potential users at a loss. This state of affairs must
One class of formal tool, model finders, provides concrete instances of a specification, which can guide a user’s intuition or witness the failure of desired properties. But are the examples produced
actually helpful? Which examples ought to be shown first? How should they be presented, and what supplementary information can aid comprehension? Indeed, could they even hinder understanding?
We’ve set out to answer these questions via disciplined user-studies. Where can we find participants for these studies? Ideally, we would survey experts. Unfortunately, it has been challenging to do
so in the quantities needed for statistical power. As an alternative, we have begun to use formal methods students in Brown’s upper-level Logic for Systems class. The course begins with Alloy, a
popular model-finding tool, so students are well suited to participate in basic studies. With this population, we have found some surprising results that call into question some intuitively appealing
answers to (e.g.) the example-selection question.
For more information, see our paper.
Okay, that’s student populations. But there are only so many students in a class, and they take the class only so often, and it’s hard to “rewind” them to an earlier point in a course. Are there
audiences we can use that don’t have these problems? Stay tuned for our next post.
|
{"url":"https://blog.brownplt.org/2017/07/01/fmtools-usability.html","timestamp":"2024-11-07T09:36:58Z","content_type":"text/html","content_length":"19747","record_id":"<urn:uuid:afc2e76b-b4ab-4bfe-8c45-9e54bc5ad937>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00524.warc.gz"}
|
Angle Bisectors: Definition, Properties, and Theorems
Angle Bisectors: Definition, Properties, And Theorems
An angle bisector divides an angle into two equal (congruent) angles. It is a line that passes through the vertex of the angle and divides it into two equal parts. The Angle Bisector Theorem states
that if a line bisects an angle, then the two angles formed are congruent. The Angle Bisector Interior Angles Theorem states that if a line bisects an angle, then it also divides the opposite side
into two equal segments. The Angle Bisector Exterior Angles Theorem states that if a line bisects an angle, then it also bisects the supplement of that angle.
Unlocking the Secrets of Geometry with Angle Bisectors and Perpendicular Bisectors
In the realm of geometry, precision is paramount. And when it comes to defining shapes and dividing them into equal parts, two crucial concepts emerge: angle bisectors and perpendicular bisectors.
These geometric marvels play a pivotal role in understanding the intricacies of shapes, making them indispensable tools for architects, engineers, and geometry enthusiasts alike.
Angle Bisectors: Dividing Angles with Precision
An angle bisector is a line or ray that divides an angle into two equal parts. Imagine a pie cut into two perfectly symmetrical slices, with the angle bisector acting as the knife that makes this
division happen. The Angle Bisector Theorem states that if a ray bisects an angle, it also divides the opposite side of the angle into two equal segments. This property makes angle bisectors
invaluable for constructing congruent segments and angles.
Perpendicular Bisectors: Finding Midpoints and More
A perpendicular bisector is a line or ray that intersects a line segment at its midpoint and is perpendicular (at a 90-degree angle) to the line segment. Think of a line segment as a stick, and the
perpendicular bisector as a ruler placed perpendicular to the stick at its exact center. The Midpoint Theorem proves that the perpendicular bisector of a line segment always passes through its
midpoint. Leveraging this concept, perpendicular bisectors are used to locate midpoints and divide line segments into equal parts.
The Power of Angle Bisectors and Perpendicular Bisectors in Geometry
The significance of angle bisectors and perpendicular bisectors in geometry cannot be overstated. They are essential for:
• Constructing congruent segments and angles: Angle bisectors ensure equal division of angles, while perpendicular bisectors guarantee equal division of line segments.
• Locating midpoints and dividing line segments: By finding the intersection of perpendicular bisectors, we can pinpoint the midpoint of any line segment and divide it into equal parts.
• Solving geometric problems: Angle bisectors and perpendicular bisectors provide a systematic approach to solving complex geometric problems, unlocking the secrets of shapes.
Angle bisectors and perpendicular bisectors are indispensable tools in the geometer’s toolbox. They bring precision and symmetry to the study of shapes, enabling us to understand and construct
complex geometric figures with ease. Their applications extend far beyond geometry, impacting fields such as architecture, engineering, and even everyday life. By embracing these geometric marvels,
we gain a newfound appreciation for the beauty and order that surrounds us in the physical world.
Angle Bisectors
• Angle Bisector Theorem: Definition and explanation
• Angle Bisector Interior Angles Theorem: Statement and proof
• Angle Bisector Exterior Angles Theorem: Statement and proof
Angle Bisectors: Unlocking Symmetry in Geometry
In the realm of geometry, understanding angle bisectors is paramount for unlocking the secrets of symmetry and shape construction. An angle bisector is a line or ray that divides an angle into two
congruent (equal) angles. This fundamental concept plays a pivotal role in various geometrical theorems and applications.
The Angle Bisector Theorem states that if a ray bisects an angle, then it divides the opposite side into segments proportional to the adjacent sides. This theorem holds immense significance in
construction problems, where it can be used to divide segments and construct parallel lines.
Another crucial theorem is the Angle Bisector Interior Angles Theorem, which establishes that if an angle bisector is drawn in a triangle, then the ratio of the lengths of the opposite sides is equal
to the ratio of the adjacent sides. This theorem provides a powerful tool for solving triangle problems involving proportions.
Furthermore, the Angle Bisector Exterior Angles Theorem asserts that if an angle bisector is drawn in a triangle, then the exterior angle formed at the vertex where the angle bisector meets the
opposite side is equal to the sum of the two non-adjacent interior angles. This theorem is particularly useful in finding missing angle measures in triangles.
By harnessing the power of angle bisectors, we can unlock a wide array of geometrical constructions and solutions. From partitioning line segments to constructing congruent angles, these concepts
form the cornerstone of many applications in geometry and related fields. Whether you’re an aspiring architect or a curious student, understanding angle bisectors is a transformative step in your
geometric journey.
Unveiling the Secrets of Perpendicular Bisectors
In the realm of geometry, angle bisectors and perpendicular bisectors emerge as indispensable tools, revealing hidden relationships and unlocking a world of possibilities. Among them, perpendicular
bisectors stand out as masters of symmetry and precision, offering unparalleled insights into the anatomy of line segments.
Distance from a Point to a Line: The Symmetry Unveiled
Imagine a line segment standing before us, a symmetrical warrior with endpoints at opposite ends. Now, let’s introduce a brave perpendicular bisector, a line that cuts the segment in half at its very
midpoint. This bisector exhibits a fascinating property: it is equidistant from the segment’s endpoints.
This distance remains constant, creating a sense of symmetry around the midpoint. The perpendicular bisector serves as a dividing line, partitioning the segment into two congruent halves, each
sharing the same distance from the bisector.
Midpoint: The Heart of the Line Segment
Nestled at the heart of the bisected segment lies a point of pivotal importance—the midpoint. This point marks the exact center of the segment, where the two halves meet in perfect balance. It is
the meeting ground where the perpendicular bisector and the line segment intersect, forming a cross-shaped symmetry.
Midpoint Theorem: The Proof of Symmetry
The Midpoint Theorem stands as an unshakeable pillar in the world of geometry. It boldly proclaims that any point lying on the perpendicular bisector of a line segment must be its midpoint. In other
words, the perpendicular bisector acts as a celestial compass, always pointing towards the midpoint with unwavering accuracy.
This theorem forms the cornerstone of our understanding of symmetry in line segments. It provides a logical framework for determining the exact location of the midpoint, ensuring precision in
geometrical constructions and measurements.
Applications: Transforming Geometry
The practical applications of perpendicular bisectors extend far beyond the confines of textbooks. In the real world, they serve as invaluable tools for engineers, architects, and designers seeking
to create symmetrical structures and precise measurements.
Locating Midpoints: Perpendicular bisectors empower us to pinpoint the midpoints of line segments with unparalleled accuracy. This knowledge is crucial in dividing segments into equal parts, ensuring
balance and harmony in constructions.
Constructing Symmetry: Angle bisectors become indispensable allies when the task is to construct congruent lines and segments. They guide us in creating symmetrical shapes and designs, bringing order
and beauty to the world around us.
Perpendicular bisectors and their unwavering commitment to symmetry and precision have earned them an indispensable spot in the realm of geometry. Their ability to reveal distances, locate midpoints,
and construct symmetrical shapes makes them indispensable tools for students, engineers, and artists alike.
As we delve deeper into the world of geometry, may we always remember the power of perpendicular bisectors, the silent guardians of symmetry and the architects of precision in the geometrical
• Using angle bisectors to construct congruent lines and segments
• Using perpendicular bisectors to locate midpoints and divide line segments into equal parts
Applications of Angle Bisectors and Perpendicular Bisectors: Tools for Construction and Measurement
In the realm of geometry, angle bisectors and perpendicular bisectors play pivotal roles in constructing congruent lines, dividing line segments, and uncovering hidden patterns. These geometric tools
serve as essential aids for architects, engineers, and anyone seeking precision in their measurements.
Let’s delve into the practical applications of these geometric constructs:
Angle Bisectors: Constructing Congruent Lines and Segments
An angle bisector, by definition, divides an angle into two equal parts. This property makes it an invaluable tool for creating congruent lines and segments. By constructing an angle bisector, you
can effectively create two lines or segments with the same length and direction. This is crucial in architecture, where symmetries and precise measurements are paramount.
Perpendicular Bisectors: Locating Midpoints and Dividing Segments Equally
A perpendicular bisector, as its name suggests, is a line that intersects a line segment at its midpoint and is perpendicular to the segment. This concept proves invaluable when you need to divide a
line segment into two equal parts. By constructing a perpendicular bisector, you can quickly and accurately locate the midpoint, ensuring equal distribution of length on either side of the segment.
Angle bisectors and perpendicular bisectors are indispensable tools in geometry, providing a means to construct congruent lines, locate midpoints, and divide line segments with precision. Their
applications extend beyond the classroom, into the practical realms of architecture, engineering, and any discipline where precise measurements are essential. Understanding these concepts not only
enhances your geometric knowledge but also empowers you with the ability to solve real-world problems with accuracy and confidence.
Leave a Comment
|
{"url":"https://www.bootstrep.org/angle-bisectors-definition-properties-theorems/","timestamp":"2024-11-10T09:15:30Z","content_type":"text/html","content_length":"153508","record_id":"<urn:uuid:da7bde12-007e-4dea-9390-32de27bb4c1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00137.warc.gz"}
|
Area of Circle in Python | StudyMite
Program for Area of Circle in Python
In this program, with the help of python, we are calculating the area of the circle.
To find the area of a circle, you can use the formula "area = π * r * r", where "r" is the radius of the circle and "π" is the mathematical constant pi, which is approximately equal to 3.14. This
formula gives the number of square units that fit inside the circle.
For example:
Input: r = 5
Output: 3.14 * 5 * 5 = 78.5
1. Prompt the user to input the radius of the circle.
2. Store the radius in a variable. This will allow you to use the value in your calculations.
3. Calculate the area of the circle using the formula "area = π * r * r", where "r" is the radius of the circle and "π" is the mathematical constant pi, which is approximately equal to 3.14.
4. Print the calculated area to the screen.
5. End the program.
r = int(input("Enter the radius:"))
area = 3.14*r*r
print("The area is:", area)
Enter the radius : 2
The area is : 12.56
|
{"url":"https://www.studymite.com/python/examples/program-to-find-area-of-circle-in-python","timestamp":"2024-11-04T09:16:23Z","content_type":"text/html","content_length":"43440","record_id":"<urn:uuid:02a4d3ef-449d-43f8-80ff-cbd0573c48f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00872.warc.gz"}
|
Logical Causality in Quantum Mechanics
Principal Investigator: Michael Epperson. Co-Investigators: David Finkelstein, Professor, Department of Physics, Georgia Institute of Technology; Henry P. Stapp, Lawrence Berkeley National
Laboratory; Timothy Eastman, NASA-Goddard. Click here for Project Description. Supported by a grant from the Fetzer-Franklin Fund (Grant D11C36).
This project explores the mutually implicative relationship of the causal and logical orders in quantum mechanics and its connection to the mutually implicative relationship of subject and object
(viz. contextualized measurement and measured system).
The logical structure of quantum theory emphasizes the importance of the non-unitary, contextualized evolution of the density matrix via von Neumann’s Process 1. The effects of Process 1 are
specified by the orthodox von Neumann rules, but the process by which the measurement basis restricts the evolution and fixes a particular projection operator is not specified by any yet-known law or
rule. The principle of the causal closure of the physical is thus not validated by the known rules of contemporary quantum physics.
Von Neumann’s Process 1 was a crucial attempt to render explicit the non-classical, contextualizing effect of the measuring apparatus (‘subject’) upon the measured system (‘object’) in a way that at
once preserved the traditional meanings of ‘subject’ and ‘object’ and made possible their coherent application to quantum mechanics. By this framework, the objective state of a classically-conceived
‘measuring subject’ actively, contextually conditions the superposition of potential outcomes states constitutive of the classically conceived ‘object measured.’ Prior to the measurement outcome, the
potential objective qualifications of the ‘object-in-process’ always conform to ‘subjective’ contextualization via the measuring apparatus according to its orthonormal basis. The measured system is
thus best understood not as a ‘classical object’ but rather a quantum physical ‘object-in-process’—i.e., a history of contextual quantum ‘measurement relations’ or, more generally, ‘quantum praxes.’
Executive Summary
With the advent of quantum theory, the philosophical distinction between ‘what appears to be’ and ‘what is reasoned to be’ has once again, after several centuries of easy dismissal by classical
mechanistic materialism, become an important feature of physics. In recent well-regarded interpretations of quantum physics that give focus to the concept of 'quantum decoherence,' interpretations
including those proposed by Robert Griffiths, Roland Omnès, and Nobel laureate Murray Gell-Mann among several others, we have seen careful investigations into the physical (i.e., not ‘merely
philosophical’) distinction between the order of contingent causal relation and the order of necessary logical implication. Each of these interpretations, in its own way, makes an explicit appeal to
the logical order as somehow ‘physically efficacious’ in or ‘governing of’ processes such as quantum decoherence and non-local, EPR-type quantum mechanical measurement interactions, among many other
associated quantum processes.
The familiar classical conceptions of ‘subject,’ ‘object,’ ‘epistemology,’ and ‘ontology’ find no fully coherent mapping onto these recent advances in quantum physics, apart from their casual,
practical application. In the same way that the causal and logical orders are treated as mutually implicative in these modern interpretations of quantum physics, so too are the pairings of ‘subject
and object,’ ‘epistemology and ontology.’
Principal Investigator Michael Epperson has argued that a careful philosophical exploration of the function of the logical order in modern interpretations of quantum physics yields an ineluctable
re-casting of the classical, dualistic understandings of ‘subject-object,’ ‘epistemic-ontological’: The ‘subjective’ and ‘objective’ features of nature described by quantum physics are not best seen
as fundamental, complementary, mutually exclusive features of reality as suggested by Bohr; rather, they are more coherently understood as mutually implicative features of fundamental units of
relation (cf. Michael Epperson, Quantum Mechanics and the Philosophy of Alfred North Whitehead, Fordham University Press, 2004). The Aristotelian infimae species are not material quanta related as
‘subject’ and ‘object’; the infimae species are, rather, the quantum relation-events (praxes) themselves, and their logical ordering into serial quantum histories (cf. Robert Griffiths, “Consistent
Histories and the Interpretation of Quantum Mechanics.” Stat. Phys. 36: 219-272, and Robert Griffiths, Consistent Quantum Theory. Cambridge University Press, 2002).
Whereas quantum physics was initially lamented for the intrusion of ‘subjectivity’ into ‘objective’ physics, the explicit accounting of the role of the logical order in quantum physics reveals that
it is not fundamental subjectivity that is evinced, but rather fundamental relativity, whose ultimate physical units are quantum praxis-events and their serial, logically ordered histories.
Problematic phenomena such as EPR-like, nonlocal quantum causality can be more coherently accounted for in such a holistic framework, and not merely tolerated as an odd dispensation from classical,
local causal mechanics. But at the same time, classical, local mechanics are not merely dismissed or ‘explained away’ as epistemic artifacts of an underlying, fundamentally holistic quantum ontology,
for the universe is not ‘sheerly’ holistic by this interpretation of quantum physics, which further leaves space for both epistemological and ontological emergence.
Unlike other interpretations of quantum mechanics (the various local and non-local ‘hidden variables’ interpretations, for example), local causality is not here cast as an epistemic illusion
reducible to a fundamental ontological, sheerly nonlocal (holistic) causality; and neither is nonlocal causality cast in similar fashion, reducible to fundamental, merely local, classical causality.
Instead, the speculative philosophy of Relational Realism proposed herein, with its praxiological interpretation of the quantum theory, depicts the local and nonlocal features of these quantum
relation-events (quantum praxes) as mutually implicative features of reality. For example, causal relations that are locally restricted by their respective light cones and the constancy of the speed
of light are, nevertheless, governed by their associated logical relations, which are not locally restricted by c (consistent with Einstein’s special and general theories of relativity.) Thus,
nonlocal EPR-type experimental results can be properly understood not as ‘non-local causality at a distance’ via some superluminal transfer of energy in violation of classical mechanics (as given in
the various hidden variables interpretations, for example), but rather as causal quantum mechanical relations among spatially well-separated but logically ordered quantum praxes.
By this interpretation, the mechanics of physical causal relation given in the orthodox quantum theory is explicitly characterized as 'logically governed' by virtue of the logial presuppositions
inherent within the formalism. At the conceptual level, for example, one could point to the presupposition of the laws of Boolean logic within probability theory--at least in the particular way the
latter is employed in quantum theory. At the level of the physics, this is especially well-reflected in the concept of quantum decoherence, such that the latter can be understood as evincing a
physically significant effect of this logical governance--a logical 'conditioning' of physical causality. Beyond its pertinence to familiar problems in physics, such as temporal asymmetry in
thermodynamics, the thesis of logical causality explored by this research program has special relevance to philosophy as well, in that it might shed new light on the ages-old problem of correlating
the order of causal relation and the order of logical implication. In this regard, the program's study of logical causality in quantum physics entails a modern re-exploration of the relationship
between 'conceptual' and 'physical' first introduced by Plato and revisited continuously over the centuries since.
Within the past few decades, a confluence of key scientific and technological developments has set the stage for an historic rehabilitation of speculative, scientifically informed metaphysics; and it
is this rehabilitation that will most likely provide the most sure-footed bridge in any fruitful dialogue among science, philosophy, and theology. This new reintegration of metaphysics and modern
science, particularly as instantiated in modern interpretations of quantum physics, has led to promising new strategies for forging substantive advances in four primary areas: 1. Reductionism,
determinism, and causal closure; 2. The mind-body problem; 3. The ‘hard’ problem of consciousness; 4. The relationship between quantum indeterminacy and free will. Our proposed approach combines the
following multi-disciplinary features:
1. Leveraging key results of modern physics, especially quantum theory, to convert some key elements of past philosophical debates into substantive, testable hypotheses.
2. Explicitly incorporate key philosophical distinctions (such as causal-logical, epistemic-ontological, subject-object), which are often left merely implicit in the science.
3. Develop recommendations for rigorous experimental protocols that will yield clear, reproducible results.
By the speculative philosophical program of Relational Realism outlined herein, with its praxiological interpretation of quantum mechanics, the causal-logical, subject-object, and
epistemic-ontological 'dualisms' emblematic of conventional substance-based accounts (i.e., dualisms derived from fudamental material quanta) are recast as mutually implicative features of
fundamental quantum praxes or relation-interaction events. Our investigation will be unique in that we shall proceed both from the ‘bottom up’ science of fundamental physics, and from the ‘top down’
sciences of more complex systems. From the bottom up, our investigations of the fundamental physics will be grounded in the work of co-investigator David Finkelstein at Georgia Tech, who has proposed
new insights such as characterizing classical logic as merely a singular limit of quantum kinematics, and an emphasis of ‘actions’ (praxes) versus ‘objects’ in quantum physics.
From the top down, our investigation will include the work of co-investigator Henry Stapp, who has studied the ways in which large, complex systems such as the human mind exemplify these mutually
implicative concepts—subject/object, logical/causal, epistemic/ontological—and their underlying quantum physical formalism. Dr. Stapp’s theoretical work on ‘quantum neuroscience’ applies an effect of
well-tested standard quantum theory (the quantum Zeno effect) to obtain testable predictions about how the body can act in accord with the conscious intent of a human observer. Our goal with respect
to this aspect of the research will be to identify ways in which the metaphysical principles undergirding our non-materialistic, non-dualistic, ‘event-ontological’ (‘praxiological’) interpretation of
quantum physics might be exemplified in modern neuroscience and other important applications. (It is important to note here that a neuroscientific exemplification of the quantum theoretical framework
in no way implies a reduction of mind or consciousness to quantum mechanics.) Rather, as a metaphysical desideratum, it is expected that logical causality as the a priori foundation of a
‘reasonable,’ quantum mechanically describable universe will have its reflection in the functioning of the ‘reasoning’ human brain to the extent that it, too, is quantum mechanically describable. If
quantum physics can be shown to be demonstrably applicable to a neuroscientific description of the brain—even in a limited sense—then it could be argued that the key role of the logical order in our
approach to interpreting quantum mechanics will be reflected in the logic underlying human reason.
Building on his previous work on the philosophical implications of the quantum theory (Epperson 2004), the program of Relational Realism introduced herein is grounded upon the notion that a coherent
correlation, in the language of physics, of the causal and logical orders—one that includes a careful distinction between causal (i.e., efficient causal) efficacy and ‘logical efficacy’ or ‘logical
governance’—is of first importance in modern physics and in modern science and philosophy more generally. Such a correlation might ultimately reveal deeper levels of systematic distinctiveness among
the manifold competing modern interpretations of quantum physics—distinctiveness that is at once metaphysical and demonstrably physical. More broadly, the speculative philosophical program of
Relational Realism is intended to exemplify a novel two-way-rapprochement of physics and philosophy that could advance an understanding of nature in ways that transcend the conventional separation of
these two disciplines.
Perhaps most important, the speculative philosophical program of relational realism proposes a novel means of bridging the centuries-long proliferation of disconnected worldviews by which ‘science’
is variously interpreted as exemplifying mutually antithetical ‘ultimate’ meanings and cosmological implications—reductionism versus holism, objectivism versus subjectivism, cosmogonic creation
versus cosmological evolution, among many others. This proliferation has markedly impeded progress toward the evolution of scientifically informed worldviews capable of coherently accommodating
fundamental features of experience that lie beyond the restricted scope of scientific description.
Co-investigator Henry Stapp
Mindful Universe: Quantum Mechanics
and the Participating Observer Springer Verlag, Berlin, 2007 p.117
Logical Causality in Quantum Mechanics
The fall of Aristotelian physics and the rise of modern science sparked the ignition of two vital engines that have carried modern Western philosophy of nature to eminence in the 20th century.
The first engine, technological innovation, is often mistaken as the primary engine driving this ascent because its roar has, over the centuries, generated the most attention; but its
less-regarded twin engine of mechanistic-substance metaphysics has contributed equally to the overall momentum. Indeed, even a cursory look at the history of western philosophy of science reveals
that a major role for each engine has been to carry the weight of the other. And with occasional adjustments of balance and attunement over the past four centuries, mechanistic-materialism has
contributed steadily and surely to the rise of modern science and philosophy.
With the advent of quantum mechanics early in the 20th century, however, the engine of mechanistic-materialism has begun to sputter noticeably. In just the past few decades, classical philosophy
of nature, with its comfortably understood components of ‘ontology,’ ‘epistemology,’ ‘objectivity’ and ‘subjectivity,’ has fallen so far out of alignment with the engine of technological
innovation that the wobbling can no longer be ignored. Superconducting quantum interference devices, experimental demonstrations of quantum nonlocality, quantum transistor technology… These and
other innovations have not been so easily borne by mechanistic-materialism as were the classical technological innovations of centuries past.
Even so, the desire to keep classical mechanistic-materialism in service has easily trumped most efforts to modernize—perhaps because its coming of age back in the 17th century was such a
triumph; for beyond the technological innovations it helped to power, mechanistic materialism was the first proven bridging of a chasm that had dominated philosophy for over 2000 years—the
Platonic chasm separating description and explanation of phenomena—the wide gulf between what appears to be and what is reasoned to be. With classical science, a seemingly secure pathway from
appearance to truth, from contingency to necessity, had finally been discovered; for proof, one need look no further than the technological breakthroughs fostered by this science, and the
scientific breakthroughs that would, in turn, result from these new technologies.
David Hume was perhaps the last empiricist of the early modern period to warn that cyclical progress of this kind, no matter how exhilarating, was no proof of crossing the Platonic chasm—that
even the most careful empiricism could never demonstrate a real bridging of the orders of necessary logical implication and contingent causal relation. The mathematical and philosophical first
principles associated with the logical order could, Hume argued, never be found within fundamental, ultra-reductive physical descriptions of phenomena associated with the causal order. The
certainty of an objective logical conception such as ‘if p then q’ can neither be deduced from nor demonstrated by a subjective causal perception such as ‘p (seemingly) causes q.’ Logical
necessity can never be soundly derived from causal contingency, no matter how carefully, or to whatever reductive depth, the latter is measured.
Despite such admonitions from Hume and others to follow, the twin engines of modern technology and mechanistic-materialism pushed forward steadily and unimpeded well into the 20th century,
carrying with them what have become conventional, “modernist” conceptions of ‘subject,’ ‘object,’ ‘epistemology,’ and ‘ontology.’ But after several decades of attempts to map these concepts onto
quantum physics in a manner consistent with their familiar mapping onto classical physics, the old, steady trajectory of classical, reductive empiricism, borne by the heretofore well-proven
engines of technological innovation and mechanistic-materialism, has slowly begun to degenerate. The drive toward confident crossing of the Platonic chasm via the route of sheer
reductionism—unification of the sciences via scientism—has, since the advent of quantum mechanics, instead degenerated into a divergence of barely-paved ‘interpretations’ of quantum physics, with
no easy metric by which these ever-branching routes to truth might be evaluated. Indeed, ‘The Truth’ as science’s final goal is no longer the clear destination it once had seemed to be. The chasm
separating fundamental descriptions of nature and fundamental explanations of nature remains unbridged. The familiar classical conceptions of ‘subject,’ ‘object,’ ‘epistemology,’ and
‘ontology’—the conventionally accepted foundations for all reasonable attempts at construction—simply have not been able to bear the weight of the new physics.
The notorious ‘problem of measurement’ in quantum mechanics is perhaps the best exemplification of the difficulty in two key respects:
1. Quantum theoretical descriptions of the ‘measurement’ of ‘objects’ by ‘subjects’ always yields a linear superposition of possible objective states; yet the actual measurement interaction
always terminates with a unique ‘measured object’ in a definite state. Thus the logical principles of Non-Contradiction (PNC) and the Excluded Middle (PEM) are always satisfied in the
practice of quantum mechanics; yet they are not in any way accounted for by the quantum theory itself.
2) The classical separation of ‘subject’ and ‘object’ and the associated separation of ‘epistemology’ and ‘ontology’ are, at best, only clumsily applicable to quantum physics. ‘Objective’
states are not only subsequent to measurement by a ‘subject’ but consequent of such measurement, i.e., integrally part of measurement—a seeming intrusion of subjectivity into the classical
conception of physical objectivity. And worse still, the ‘subject’ appears to play some physical role in qualifying possible outcome determinations, since these possible ‘objective’
qualifications are always expressed in terms of subjectively derived qualifications of the measurement process.
Niels Bohr proclaimed on the first page of his 1934 book Atomic Theory and Human Knowledge that “The task of science is both to extend the range of our experience and reduce it to order.” His
point is that science needs to do more than merely codify what is already known; an important part of its task is also to expand the range of our experience.
The development of quantum theory is an example of such an expansion. In the prior classical theory, the sense-data portion of our experience had been dealt with by effectively replacing all such
‘subjective’ experiential realities by corresponding ‘objective’ properties, which were assumed to be completely described in terms of mathematical properties attached to points in space-time.
The theory conformed to the principle of ‘the causal closure of the physical,’ which asserted that at each moment t all physically defined future properties are completely determined by the laws
of physics in terms of physically defined properties of the past. Quantum theory extended that earlier theoretical structure by bringing the experiencing and acting observers explicitly into the
conceptual and causal scientific structure.
The omission from the earlier classical dynamics of any contribution from the experiential aspects of reality might reasonably have been seen from the start to be an expedient approximation
destined eventually to be removed. Yet advances that broke away from the earlier one-sided theory were pursued by the founders of quantum theory not on philosophical grounds but, more
importantly, in order to cope with the stubborn fact that the quantum generalization of the classical laws produced a putative ‘objective physical reality’ that was wildly out of synch with our
sense perceptions. For example, a superposition of ‘Schrödinger cats’—some alive, some dead, some in between—is not given to our experience. That ‘defect’ was rectified by introducing into the
dynamics what Bohr called “the free choice of experimental arrangement for which the mathematical structure of the quantum mechanical formalism offers the appropriate latitude” and what von
Neumann, in his book Mathematical Foundations of Quantum Mechanics, called “Process 1.”
According to the basic principles of quantum theory, all accessible knowledge pertaining to the input/preparation of a system S that is subject to our probing inquiries/actions is contained in a
mathematical structure ρ, called the (probability) density matrix (or operator). In von Neumann’s non-relativistic formulation this operator ρ evolves in time, and ρ(t) represents the state of
the system S at time t. The state S at time t is considered to exist over the subset of the set of space-time points (x’, y’, z’, t’) for which t’= t. The quantum dynamical law of evolution
asserts that for an isolated system S, ρ(t’) = exp –iH(t’-t) ρ(t) exp iH(t’-t), where H is the Hamiltonian operator, here assumed to be time independent. This dynamical law holds except at a
discrete set of times at which a Process 1 intervention occurs, and at a discrete set of times at which answers to the queries posed at the Process 1 interventions are delivered. Each probing
inquiry can be reduced to a set of ‘yes-no’ type questions. Each such question is associated with a pair of projection operators P and P’ = (1-P), and is represented mathematically by the Process
1 action ρ à ρ’ = PρP + P’ρP’. The probability that the delivered answer is ‘yes’ to the possible state PρP is Trace PρP/ Trace ρ. Tomonaga and Schwinger have generalized these von Neumann (vN)
rules to relativistic quantum field theory (RQFT). Then ρ(t) gets replaced by ρ(σ), where σ is a continuous three-dimensional subset of space-time, having the property that for any pair of points
p and p’ on σ the separation between p and p’ is space-like (Stapp, 2007).
The description of the logical structure of quantum theory given above emphasizes the importance in conventional orthodox quantum theory of Process 1 interventions. They are interventions from
outside the scope of the dynamics described by the known mathematically described physical processes. The effects of these ‘interventions’ (as described by von Neumann) are specified by the
orthodox vN rules, but the process that fixes the selected projection operator P is not specified by any yet-known law or rule. The principle of the causal closure of the physical is thus not
validated by the known rules of contemporary quantum physics.
Hence the question arises: How can we expand our thinking in a way that will convert these apparently causally effective interventions into elements of a logically coherent order? One of our
objectives is to bring the combined resources of contemporary philosophy, cognitive science, neuroscience, and physics more effectively to bear upon this question.
Von Neumann’s ‘Process 1’ was, in this spirit, a crucial attempt to render explicit the non-classical relationships between ‘subject’ and ‘object’ evinced by quantum mechanics in a way that both
preserved their traditional meanings, and yet made possible their coherent application to modern quantum theory. The classical conception of objective ‘qualification by quality’ was re-written by
Von Neumann into a model wherein the objective (quantum mechanical) facts constitutive of a classically-conceived ‘measuring subject’ actively conditioned the superposition of potential facts
constitutive of the classically conceived ‘object measured.’ The latter is thus best understood not as a ‘classical object’ but rather a quantum physical ‘object-in-process.’
Nevertheless, the classical conception of ‘subjective’ ‘secondary’ qualifications conforming to ‘objective’ ‘primary’ qualifications remains applicable in quantum mechanics—but only as regards
the observation or registration of the actual measurement outcome (i.e., the object-as-actual) by a subsequent subject. Prior to the actualization of this measurement outcome, the potential
objective qualifications of the ‘object-in-process’ must similarly conform to the (classically) ‘subjective’ qualifications of the measuring apparatus. Thus every classically conceived ‘object’
and ‘subject’ is, by the light of quantum mechanics, more fundamentally described as a chain of quantum ‘measurement interactions’ or, more generally, ‘quantum praxes.’
By this description, the classical dualistic separation of ‘subject’ and ‘object’ is rendered a conceptual abstraction—as is the correlate classical dualistic separation of ‘ontology’ and
‘epistemology.’ If the fundamental constituents of nature are more accurately describable as ‘quantum praxes,’ from which we might abstract the classical conception of ‘objects known by
subjects,’ we can similarly characterize the classical conceptions of ‘ontology’ and ‘epistemology’ as conceptual abstractions from a more fundamental quantum ‘praxiology.’
Epistemology, Ontology, and The New Physics: Quantum Praxiology
The term ‘praxiology’ (and its alternative spelling, ‘praxeology’) has traditionally referred to the study of human action, as rooted in the work of French philosopher and sociologist Alfred
Espinas (1844-1922). It has since evolved into a number of related philosophical and sociological applications. In 1923 in the Polish academy, Tadeusz Kotarbinski (1886-1981), developed
‘praxiology’ as the theory of deeds, practice, and efficient action. The term was later applied to theories of action in economics by several scholars, including Eugene Slutski (1926) and Ludwig
von Mises (1933) among others, as well to the study of moral philosophy and ethics (cf. Mario Bunge, McGill University.) Recent scholarship by the French and Polish academies, however, has traced
the origins of praxiology to the work of philosopher of science Louis Bourdeau, who coined the term ‘praxeologie,’ defining it as a ‘science of functions’ (Théorie des Sciences: Plan de Science
Integrale,Paris: Librairie Germer Baillière, 1882, v2).
The conception of ‘praxiology’ given in the work of Bourdeau and his early French contemporaries is quite different from that seen in the later, more specialized applications developed by
Espinas, Mises, Bunge, and others; the distinctiveness is so profound that today, there have developed two separate traditions of praxiology/praxeology, each adopting one of the two alternative
Praxeology refers to the human action tradition and its applications to economics and ethics, rooted in the works of Ludwig von Mises and subsequent scholarship.
Praxiology refers to the tradition whose roots go back to the origin of the term in the work of Bourdeau and the early French school. This is the tradition whose development continues with
the French and Polish academies (cf. Praxiology: The International Annual of Practical Philosophy and Methodology. New Brunswick, N.J.: Transaction Publishers, 1992-present. NB: vol. 7, The
Roots of Praxiology: French Action Theory from Bourdeau and Espinas to Present Days, V. Alexandre and W.W. Gasparski, eds., 2000.)
Our application of the term ‘praxiology’ to quantum physics, particularly with respect to the work of von Neumann and the implications described above, exemplifies several of Bourdeau’s key
conceptions of praxiology as ‘the science of functions.’ For example, Bourdeau writes:
We give the name of "function" (from the Latin fungor, ‘I perform’) to a series of effects which are accomplished in a certain form under the influence of actions of the environment…
Expressing the relationship between forms and the environment, it matches the condition of the former to those of the latter and relates each being to its habitat. Hence, it connects the
parts to the whole, subordinates each detail to the entirety and completes the knowledge.
…Taken as a whole, these functions makes up a truly unified category, despite the differences in their aspects. They are all caused by environmental actions modified in the forms and are
refracted in a series of effects. Science has not yet systematized their order and even lacks the proper term to name the general force that produces them.
Specific powers have of course been imagined to explain certain functions, such as life or vital forces and the soul or physical forces; but these imagined agents vested with partial
functions arouse serious objections as did the discredited agents of early Physics. It would be in accordance with the methods of science to attribute all the functions to a single force of
the same order as gravity, physical action and affinity. We shall give it the broadest name of ‘the force of activity.’ (The Roots of Praxiology: French Action Theory from Bourdeau and
Espinas to Present Days, V. Alexandre and W.W. Gasparski, eds., 2000, p.21-23)
When Bourdeau posits that diverse functions in nature are comprised by “a truly unified category, despite the differences in their aspects,” and that “they are all caused by environmental actions
modified in the forms and are refracted in a series of effects,” one finds these notions echoed loudly in the works of many of the best-regarded quantum theorists of our time, including Nobel
laureate Murray Gell-Mann, Roland Omnès, Robert Griffiths, and others. These physicists, despite their individual differences of approach, have each posited conceptions of quantum physics that
aim at the very same unification described by Bourdeau, including the function of the environment in that unification. These thinkers each attempt to unify quantum and classical descriptions of
nature, for example, and stress the function of the environment in quantum measurement interactions. In addition, they all begin with von Neumann’s approach toward ‘quantum measurement,’ defined
above as ‘quantum praxis,’ and similarly derive the classical notions of ‘subject’ and ‘object’ from this definition.
Just as important to both the praxiology of Bourdeau and these modern interpretations of quantum physics, however, is the physical distinction between the order of causal relation and the order
of logical implication. Bourdeau writes:
The work of function is quite specific and must not be mistaken, as is sometimes the case, for that of modality or composition. A function is characterized by the order of its developments
owing to the unity of direction [i.e., an asymmetrical logical order] which the structure imposes upon concurrent forces. (Ibid.)
And indeed, temporal and logical asymmetry must be a part of any coherent ontological interpretation of quantum mechanics. One finds, for example, a close connection between Bourdeau’s quotation
above and Heisenberg’s insistence that “every act of observation is by its very nature an irreversible process; it is only through such irreversible processes that the formalism of quantum theory
can be consistently connected with actual events in space and time” (Heisenberg, 1958, 52). And similarly, the research programs of Gell-Mann et al. make explicit appeals to the logical order as
somehow ‘physically efficacious’ in or ‘governing of’ processes such as decoherence and non-local, EPR-type quantum mechanical measurement interactions (cf. Roland Omnès. The Interpretation of
Quantum Mechanics. Princeton, N.J.: Princeton University Press, 1994, and Robert Griffiths. Consistent Quantum Theory. Cambridge: Cambridge University Press, 2002 as examples.)
Therefore, a coherent correlation, in the language of physics, of the order of causal relation and the order of logical implication—one that includes a careful distinction between causal (i.e.,
efficient causal) efficacy and ‘logical efficacy’ or ‘logical governance’—is of first importance in the proposed investigation of quantum praxis.
Bridging Logic to Causality in Quantum Mechanics
The relation between the logical and causal orders is illustrated most concretely in the simplest model, Peano’s. This is based on a successor operation ι converting any integer “moment” to the
next, n to ι n = n + 1. Peano later generalized ι to the unit-set-generating operation ι in his set theory, converting any set s of any cardinality into the unit set ι s = {s}; we call this
unition. The Peano causal order ι presupposes a logical order represented by the inclusion relation
holding between any set
regarded as defining a predicate or class of integers, and any subset s’of s (Finkelstein, 2002).
The quantum theory of Bohr and Heisenberg was the first physical theory to transcend the mechanical notion of absolute truth implicit in mathematics; Heisenberg called it ‘non-objective.’ Like an
integer, physical systems seem to have maximal descriptions. Unlike an integer, these are incomplete; every predicate has complementary ones. Bohr and Von Neumann respectively renounced and
revised classical logic in formulating quantum theory. We see this as renouncing ontology (theory of being) for a praxiology (theory of acting); this is not an ‘interpretation’ of quantum theory
but rather a ‘paraphrase.’ Since all actual observations make changes in the system beyond our control, we do not assume that a system has an absolute ontology, except as a singular limit
but only an absolute praxiology, a network of quantum processes represented by an operator algebra associated with the system.
Most quantum theory to date has retained the classical theory of the causal order and an absolute space-time, projecting these pre-quantum concepts into the quantum microcosm in what has long
been recognized as unphysical and probably a failure of the theorist's imagination. Such mixed quantum/classical field theories are structurally unstable and singular as well as false to actual
practice. They challenge us to reconstruct the physical theory of the causal order based on explicit linkages with the logical order.
“...While it is impossible to fit quantum theory into classical understanding, it is possible to understand it on its own praxic terms.”
Co-Investigator David R. Finkelstein
Quantum Relativity: A Synthesis of
the Ideas of Einstein and Heisenberg
Springer Verlag, Berlin, 2002, p.35
Just as classical unition ι (taken with union) generates classical set theory, a quantum unition operator ι generates a quantum set theory rich enough for field theory. Like classical set
theory, this quantum set theory comes with no manual for building a physical theory with its tools. This is provided by a correspondence principle generalizing Bohr’s, which is implicit in a
suggestion by Irving Segal: Present-day singular physical theories are singular limits of a regular physical theory. Slight errors in the commutation relations have converted simple Lie
algebras into nearby non-simple ones.
This suggests that a suitable Lie algebraic (i.e. the algebra characterizing a physical theory’s transformations, as idealized in an infinitesimal limit) simplification can restore the
hypothetical regular theory (i.e., a theory devoid of singularities which are denotive of a theory’s failure to give definite information.) This is an extension of canonical quantization; one
may call it simplification quantization. One way to implement it is to move physical theories from their singular foundations in classical set theory onto the regular foundations of quantum
set theory, which provides the necessary variety of simple Lie algebras. ‘Simplification quantization’ regularizes singularities that have eluded canonical quantization, while maintaining
agreement with experiment. Simplification quantizations of gauge theory in general and of the gravitational theory of the causal order in particular are underway by co-investigators David
Ritz Finkelstein and Mohsen Shiri. (See Appendix, “Transcendence of Physical Theories”)
This hypothesis provides an origin for the important Lie algebras of quantum physics, including the Lorentz, Heisenberg, Poincaré, and unitary ones. The basic Lie algebras define the
statistics of quantum aggregates. These then generate kinematical algebras—i.e., algebras characterizing all possible dynamical outcomes of a quantum system, as underwritten by the theory.
Finally, the operators in kinematical algebras that are symmetries of organized modes like condensates make up the symmetry groups and Lie algebras. In this approach, there are no truly
fundamental symmetries in nature. Empirical symmetries tell us about the symmetry of some organized substratum and are contingent upon that organization. Space-time curvature and classical
gravity can now be regarded as residual effects of the quantum non-commutativity of a simple space-time-energy-momentum Lie algebra near the singular limit of classical space-time.
These and other exemplifications of logical causality in fundamental physics might ultimately reveal deeper levels of systematic distinctiveness among recent interpretations of quantum
mechanics; and at the same time, they might point to a broad unification of the sort proposed by Bourdeau—a unification that is at once metaphysical and demonstrably physical. In this
context, the speculative philosophical program of Relational Realism will aim at exemplifying a novel two-way-rapprochement of physics and philosophy that could advance an understanding of
nature in ways that transcend the conventional separation of these two disciplines.
“The inability of quantum mechanics to account for the actualization of potentia or the temporally asymmetrical relations which obtain from such actualizations,is not problematic given
that quantum mechanics presupposes and anticipates the existence of facts; this is evinced in the concepts of state evolution, probability, and history. ‘One may consider,’ writes
physicist Roland Omnès, ‘that the inability of the quantum theory to offer an explanation, a mechanism, or a cause for actualization is in some sense a mark of its achievement. This is
because it would otherwise reduce reality to bare mathematics and would correspondingly suppress the existence of time.’”
Principal Investigator Michael Epperson
Quantum Mechanics and the Philosophy of
Alfred North Whitehead
Fordham University Press, New York, 2004, p.94-95
Just as Bourdeau’s definition of praxiology found its easy evolution into the more specialized applications of ‘praxeology’ seen in Espinas, Mises, Bunge, et al., we anticipate a similar
cross-disciplinary application of quantum praxiology into several diverse areas of scientific inquiry, including neurophysiology and the ‘hard’ problem of consciousness.
Quantum Praxiology: Additional Implications and Exemplifications
The study of the logical order inherent in quantum theory has recently been advanced by Robert Griffiths in his introduction of the concept of (logically) consistent histories. Gell-Mann
and Hartle, in an influential paper, have used this concept as a foundational part of their attempt to understand the origin, in a fundamentally quantum world, of the essentially
‘classical’ character of human experience. Gell-Mann and Hartle use Griffiths’ idea, and its developments by Roland Omnès, in conjunction with a major reinterpretation of Everett’s
Many-Worlds proposal. Whereas Everett’s ‘many worlds’ are typically interpreted as equally real, ‘co-actual’ alternative universes, Gell-Mann and Hartle recast these ‘many worlds’ as
‘many alternative histories of the universe’:
…The many worlds are all described as being ‘all equally real,’ whereas we believe it is less confusing to speak of ‘many [alternative] histories, all treated alike by the theory
except for their different probabilities.’ To use the language we recommend is to address the familiar notion that a given system can have different possible histories, each with its
own probability; it is not necessary to become queasy trying to conceive of many ‘parallel universes,’ all equally real. (Gell-Mann, 1994, p.138)
Likewise, Griffiths in his book on consistent histories never mentions the ‘many-worlds’ idea, and Omnès is, in many places, explicitly contemptuous of the idea. Omnès also stresses a
major deficiency of the Consistent Histories approach when considered as a full foundational structure: It deals with logically consistent ‘possibilities,’ but can give no accounting or
explanation for the emergence or existence of actual facts. At the same time, however, Omnès writes that this ‘deficiency’ might be seen as “a mark of achievement” when properly
understood as a necessary limitation of quantum mechanics (Omnès, 1994, 494)—a boundary that ends at the bridge between the logical and causal orders. Crossing, for Omnès, with a nod back
to Hume, cannot be accounted for by the physics alone. If quantum mechanics alone could account for the existence of facts, over and above merely providing a fundamental description of
them, it would amount to a brute force assimilation of the causal order to the logical order—an unappealing and unwarranted reduction of “reality to bare mathematics.” (Ibid.)
Gell-Mann and Hartle approach this problem of the generation of actual facts by introducing the concept of an IGUS, an Information Gathering and Utilizing System, the paradigmatic example
of which is a human being. The general characteristics of such ‘complex adaptive systems’ is the subject of much ongoing research by Gell-Mann, Hartle, and their colleagues at the Santa
Fe Institute. Our program can be viewed as an effort, from a different direction, to bring IGUSes, and associated facts, into quantum theory in a way that explicitly links logical order
to causal order. This goal—in line with the admonitions of Hume, revitalized by modern theorists like Omnès—will not be to fashion quantum mechanics into a fundamental explanation of
actual facts in the sense of fully accounting for their existence. We propose, instead, to explore how quantum mechanics might provide a coherent and empirically adequate fundamental
description of actualities as quantum actualization events or ‘quantum praxes.’
Empirical evidence to support the sort of modeling for IGUSes that we intend to develop depends of course on the detailed model or models proposed. But the general characteristic will be
macroscopic quantum effects in biological systems that appear to be beyond the capacity of classical-physics-based systems. A possible first example of such an effect may be in the
harvesting of radiant energy by photosynthetic systems, introduced in the previous section of this proposal (p.13). The recent letter of Engel et. al. (2007), published in Nature, gives
empirical evidence that photosynthesis uses a macroscopic quantum effect akin to Grover’s algorithm, a strictly quantum effect. If this basic biological process uses a macroscopic quantum
process then it is plausible that macroscopic quantum effects will be used in other ways by biological systems. We plan to enlist the aid of biophysicists and neurophysiologists in our
search for such effects.
One of the models that we intend to scrutinize is the one proposed by co-investigator Henry Stapp, in collaboration with psychiatrist J. Schwartz, and neuroscientist M. Beauregard. Their
proposal involves specific features such as cortical “Templates for Action” and a harnessing of the quantum Zeno effect. Those authors have cited significant evidence in their article in
the Proceedings of the Royal Society. We shall endeavor, with the aid of neuroscience consultants, to identify in the recent literature other evidence, pro or con, and identify or propose
more definitive experiments. One of these, currently underway, is Efstratios Manousakis’ experimental work on quantum mechanics and binocular rivalry.
Summary and Outlook
Given the current state of quantum theory as an arena of competing interpretations, the philosophical basis of any metaphysical preference—be it a dipolar praxiological scheme such as the
one proposed by the philosophy of Relational Realism, a classical mechanical-material scheme, a positivist scheme, or any other—would beg as robust an exploration as the physical basis,
at least insofar as ‘philosophy’ can serve as a valued conversation partner in such explorations. That is, if a physical theory can summarily trump a metaphysical theory via the
desideratum of empirical adequacy, then the desideratum of logical coherence and consistency should similarly empower metaphysics, such that a metaphysical argument could entail a
significant critique of some particular interpretation of quantum mechanics. If, for example, one presupposes the neo-classical dualism of actuality and potentiality as foundational to
the interpretation of quantum physics, one might further wonder: Can the advantages, suggested by Heisenberg, of understanding actuality and potentiality (or the causal and logical
orders) as connected yet mutually exclusive features of reality, be preserved within a more coherent monistic/quantum praxiological scheme, such as the one given in the speculative
philosophical program of Relational Realism outlined herein? For by such a scheme, actuality and potentia, causal relation and logical implication, are not conceived of as mutually
exclusive (bipolar) features of reality, but rather mutually implicative (dipolar) features of fundamental, unified, quantum praxis events.The crucial question then becomes: Can such
quantum praxis events be described as the Aristotelian infimae species—the elusive ‘final real thing’?
This is the interpretation of quantum physics suggested by Alfred North Whitehead, who developed his event-ontological metaphysics during the same years that Heisenberg, Bohr, and their
colleagues were developing the quantum formalism. Recent work has proposed a close compatibility between Whitehead’s metaphysical scheme and modern interpretations of quantum mechanics
(Frank Hattich, Quantum Processes: A Whiteheadian Interpretation of Quantum Field Theory, Agenda Verlag 2004; Michael Epperson, Quantum Mechanics and the Philosophy of Alfred North
Whitehead, Fordham University Press 2004).
It might be argued that ontological dualisms such as the one proposed by Heisenberg (and the associated epistemic dualism proposed by Bohr) provide for ‘cleaner’ accommodations of the
physics—such that, for example, causal relation in physics enjoys its status as a ‘concrete,’ ontological, ‘physical’ reality, and logical implication is restricted to an ‘abstract’
epistemic ‘conceptual’ reality. But it can also be said that a coherent dipolar monistic metaphysical scheme such as our quantum praxiological scheme, and the closely associated scheme
proposed by Whitehead, each with its close correlation of ontological and logical first principles, is ‘cleaner’ than any such dualistic scheme (see, for example, co-investigator Timothy
E. Eastman’s “Dualities without Dualism” in Physics and Whitehead: Quantum, Process, and Experience, SUNY Press 2004). The dipolar, monistic, quantum praxiological scheme may appear more
complex insofar as it lacks any sharp speciation of reality into actuality and potentiality, concrete and abstract, physical and conceptual, causal and logical, as fundamentally mutually
exclusive features of reality. But our praxiological scheme provides a coherent complexity, such that actuality and potentiality are seen as mutually implicative features of reality, as
are the physical and conceptual features, and the causal and the logical features.
In this regard, it can be argued that a dipolar, monistic, quantum praxiological scheme such as that given by the philosophy of Relational Realism proposed herein is ‘cleaner’ than any
simply dualistic scheme that might be mostly coherent but nevertheless requires at least a few fundamental features that are mutually exclusive rather than mutually implicative. Our goal,
as was Whitehead’s, is a physical and metaphysical scheme entirely free of such selective dispensations from coherence—especially those that would amount to foundational, ontological
inconsistencies. Information-based attempts to interpret quantum theory, for example, can be viewed as reflective of Whitehead’s dipolar event ontology, where ‘physical’ and ‘conceptual’
features of actuality are mutually implicative: ‘information,’ after all, is instantiated both physically and conceptually. Its formal structure has been precisely characterized by
Shannon and von Neumann in terms of physical notions like entropy; and yet its content is nevertheless representational, i.e. irreducibly conceptual insofar as fundamentally exhibiting
‘aboutness’—that is to say, information is always information about.
If, indeed, a coherent praxiological, relational realist interpretation of quantum physics finds its way to fruition and is seen as exemplifying a Whiteheadian type (or any other type) of
metaphysical scheme, the desideratum of empirical adequacy will be of paramount importance. The metaphysics must fit the empirically validated features of the physical formalism.
Interpretations of quantum physics such as those offered by Gell-Mann, Griffiths, and Omnès, for example, derive the logical order of classical causality, in part, from the decoherence
effect, whereby potential facts constitutive of a quantum mechanical system are logically ordered into potential, mutually consistent histories. Decoherence is thus given by these
interpretations as a derivation of classical logical causality from the quantum mechanical correlation of the causal and logical orders. And indeed, there have been experiments by which
the logical order of classical causality can be seen as deriving from the logical integrations of potentia yielded by decoherence. Caldeira and Leggett (1983a Physica A121, 587) appear to
have created a successful demonstration of such a derivation. Using the classical Lorentz oscillator model, they showed that the quantum interferences manifest by the oscillations were
cancelled out via the decoherence effect. The latter, in other words, can be seen as introducing logical constraints upon the quantum system. Given sufficient time for decoherence to
occur, the system becomes describable as a classical probability distribution in phase space. Moreover, observable consequences of co-investigator David Finkelstein’s theory (as
characterized, for instance, in Finkelstein et al., [2001] “Clifford Algebra as Quantum Language,” J. Math. Phys 42, 1489-1502) are discussed in the model of the oscillator in Finkelstein
and Shiri-Garakani, 2004c Finite Quantum Harmonic Oscillator. (https://www.physics.gatech.edu/people/faculty/finkelstein/FHO0410082.pdf)
But if fitness is to be tested and evaluated among competing physical-metaphysical interpretations of quantum mechanics—i.e., those that dualistically treat actuality and the causal order
and potentiality and the logical order as separate or separable features of reality, versus the relational realist interpretation, which treats actuality and potentiality as dipolar,
mutually implicative features of every quantum praxis event—it is equally crucial that the conception of experiment be sufficiently free of serious constraints imposed by any particular
ontological commitment. This is especially important with respect to certain of these commitments that enjoy the status of ‘convention.’ Thus an emphasis on experimental testing, such as
those discussed above, combined with reduced model-dependence, and a turn away from the typical conditioning influence of traditionally inherited ontological presuppositions, will be an
important prescription for the development of metaphysically coherent interpretations of physical theories such as quantum physics. The EPR experiment, for example, was conceived by its
authors via the conventional, inherited classical mechanistic-materialistic ontology. But more recent EPR-like tests of quantum nonlocality, rather than being conceived as constricted to
this ontology, were conceived to test the limitations of this ontology.
In Conclusion
Our hypothesis is that any conception of ultimate reality that it is in any way fundamentally describable by physics must presuppose the order of logical implication as a necessary first
principle. Our investigations into quantum praxiology will build upon several modern interpretations of quantum physics that have begun to explore, in very small steps, the metaphysical
notion of an explicit correlation of the order of efficient causal relation and the order of logical implication as physically, and not merely conceptually, significant.
By ‘physically significant,’ we mean that the explicit correlation of the causal and logical orders is taken as useful to the solution of several notorious conceptual difficulties in
quantum physics: Among these, the correlation of quantum theory and relativity theory, quantum non-locality, the measurement problem, and others. But as ‘philosophically significant,’ our
program in quantum praxiology will exemplify an underlying ontology which grounds and makes possible the realities attended to by the natural sciences and the humanities.
The speculative philosophical features of our research will be examined to the extent that they are exemplified by the physics; but this is merely the starting point of our work. Once
explored carefully in this restricted arena, the metaphysical conceptions of potentiality and actuality, and the correlation of the logical and causal orders as evinced quantum
mechanically can then find their application to other domains where comparable tests of empirical adequacy are not possible.
“Any bridge intended to span the chasm separating classical and quantum mechanics must be constructed upon a sound ontological framework that is:
1. coherent, in that its most fundamental concepts are incapable of abstraction from each other and thus free from self-contradiction; 2. logical; 3. empirically applicable; 4.
empirically adequate, in that the ontology is applicable universally—both to the realm of familiar experience as well as that of theoretical experience.”
Michael Epperson
Quantum Mechanics and the Philosophy of Alfred North Whitehead, p.6
Indeed, a physically substantiated metaphysical argument such asthat given in the philosophy of Relational Realism and its praxiological approach to quantum mechanics, briefly outlined
herein, might thus find its way into the sciences as an important new metric for theory evaluation. Such a metric, we propose, may methodologically complement the formal and algebraic
‘Segal Doctrine,’ which aims to algebraically characterize fundamental physical theories as expansions into stable and simple groups (see, for example, Baugh, Finkelstein, Galiautdinov, &
Shiri-Garakani, [2003] “Transquantum Dynamics,” Foundations of Physics, vol 33, n. 9, 1267-1275). So long as the speculative metaphysical scheme includes empirical adequacy and logical
coherence as key desiderata, it is difficult to argue against the possibility of such a metric becoming a non-trivial feature of the scientific enterprise; for both science and philosophy
presuppose the same logical first principles, without which neither would be possible.
Engel, G., Clahoun, T., Read, E., Ahn T.-K., Mancal, T., Cheng, Y.-C., Blankenship, R., & Fleming, G. (2007). “Evidence for wavelike energy transfer through quantum coherence in
photosynthetic systems,” Nature, vol. 446 (April 12), 782-786.
Epperson, Michael (2004) Quantum Mechanics and the Philosophy of Alfred North Whitehead, New York: Fordham University Press
Finkelstein, David (2002). Quantum Relativity: A Synthesis of the Ideas of Einstein and Heisenberg, Berlin: Springer-Verlag
Green, H. S. (2000) Information Theory and Quantum Physics: Physical Foundations for Understanding the Conscious Process, Berlin: Springer-Verlag.
Gell-Mann, M (1994) The Quark and the Jaguar: Adventures in the Simple and the Complex, New York, WH Freeman.
Heisenberg, W (1958) Physics and Philosophy, New York, Harper Torchbooks.
Pachoud, Bernard (1999). “The Teleological Dimension of Perceptual and Motor Intentionality,” in Petitot, et. al., eds., 196-219
Petitot, J., Varela, F., Pachoud, B., Roy, J.-M., eds. (1999) Naturalizing Phenomenology: Issues in Contemporary Phenomenology and Cognitive Science. Stanford: Stanford University Press.
Stapp, Henry (1993). Mind, Matter, and Quantum Mechanics. Berlin-Springer-Verlag.
Stapp, Henry (2007). The Mindful Universe. Berlin: Springer-Verlag.
|
{"url":"https://www.csus.edu/cpns/research_lcqm.html","timestamp":"2024-11-02T17:27:54Z","content_type":"text/html","content_length":"87173","record_id":"<urn:uuid:dd8bd143-2dee-49bf-a89d-e47c62af4624>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00297.warc.gz"}
|
Level2: The first fully visual no-code Quant trading strategy creation platform. Used by 50,000+ traders.
How to use Sharpe and Sortino ratios to optimize your trading strategy
A brief on how to utilize Sharpe and Sortino ratios to optimize a trading strategy
Published on Thu Mar 14 2024
Understanding Risk-Adjusted Returns:
When evaluating a trading strategy, focusing solely on returns isn't enough. A good strategy balances potential returns with the inherent risk involved. Two key metrics, the Sharpe Ratio and Sortino
Ratio, help traders assess this risk-adjusted performance.
The Sharpe Ratio: Gauging Risk-Adjusted Returns
The Sharpe Ratio is a widely used metric that analyzes a strategy's excess return (return above the risk-free rate) relative to its volatility (total standard deviation of returns). A higher Sharpe
Ratio indicates better risk-adjusted performance. Here's a breakdown of common Sharpe Ratio interpretations:
• Less than 1: Low risk-adjusted return, potentially indicating a risky strategy with low reward.
• 1 – 1.99: Adequate/good risk-adjusted return, suggesting potential for further optimization.
• 2 – 2.99: Very good risk-adjusted return, signifying a strategy that delivers strong returns relative to risk.
• Greater than 3: Excellent risk-adjusted return, indicating a potentially high-performing strategy with good risk management.
The Sortino Ratio: Focusing on Downside Risk
The Sortino Ratio addresses a limitation of the Sharpe Ratio. While the Sharpe Ratio considers all deviations (both upward and downward) from the average return, the Sortino Ratio focuses solely on
downside volatility. This is because only downward deviations represent actual losses. By focusing on downside risk, the Sortino Ratio provides a potentially more accurate picture of a strategy's
risk-adjusted return, especially for strategies aiming for consistent returns.
Optimizing Your Strategy with Sharpe and Sortino Ratios
By analyzing both the Sharpe and Sortino Ratios, you gain valuable insights for optimizing your trading strategy. Here's how:
• Compare to Benchmarks: Compare your strategy's Sharpe and Sortino Ratios to relevant benchmarks. If your ratios exceed the benchmark, your strategy offers potentially better risk-adjusted
• Evaluate Risk Management: Look for ways to reduce downside volatility without sacrificing potential returns. This could involve tightening stop-loss orders, diversifying your portfolio, or
adjusting position sizing.
• Identify Improvement Opportunities: Analyze the factors influencing your strategy's risk-adjusted returns. Consider strategies to enhance returns while managing downside risk effectively.
Remember: Risk tolerance is individual. While higher Sharpe and Sortino Ratios often indicate greater potential, consider your comfort level with risk when interpreting these metrics.
The Sharpe and Sortino Ratios are powerful tools for evaluating and optimizing your trading strategies. By understanding these metrics and their limitations, you can make informed decisions to
achieve your investment goals while effectively managing risk.
About Level2
Level2 is the first fully visual no-code automated strategy creation tool built for retail traders. Visit https://trylevel2.com for more information.
|
{"url":"https://blog.trylevel2.com/posts/how-to-use-sortino-and-sharpe-to-optimize-strategy","timestamp":"2024-11-05T02:41:48Z","content_type":"text/html","content_length":"30913","record_id":"<urn:uuid:c264de99-3f99-4106-8f70-559d790d006e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00788.warc.gz"}
|
Introduction to Algebra
Richard Rusczyk
Paperback (2nd edition)
Text: 656 pages. Solutions: 312 pages.
A thorough introduction for students in grades 6-9 to algebra topics such as linear equations, ratios, quadratic equations, special factorizations, complex numbers, graphing linear and quadratic
equations, linear and quadratic inequalities, functions, polynomials, exponents and logarithms, absolute value, sequences and series, and more!
Learn the basics of algebra from former USA Mathematical Olympiad winner and Art of Problem Solving founder Richard Rusczyk. Topics covered in the book include linear equations, ratios, quadratic
equations, special factorizations, complex numbers, graphing linear and quadratic equations, linear and quadratic inequalities, functions, polynomials, exponents and logarithms, absolute value,
sequences and series, and much more!
The text is structured to inspire the reader to explore and develop new ideas. Each section starts with problems, giving the student a chance to solve them without help before proceeding. The text
then includes solutions to these problems, through which algebraic techniques are taught. Important facts and powerful problem solving approaches are highlighted throughout the text. In addition to
the instructional material, the book contains well over 1000 problems. The solutions manual contains full solutions to all of the problems, not just answers.
This book can serve as a complete Algebra I course, and also includes many concepts covered in Algebra II. Middle school students preparing for MATHCOUNTS, high school students preparing for the AMC,
and other students seeking to master the fundamentals of algebra will find this book an instrumental part of their mathematics libraries.
Text ISBN: 978-1-934124-14-7
Solutions ISBN: 978-1-934124-15-4
|
{"url":"https://baimingacademy.com/product/introduction-to-algebra/","timestamp":"2024-11-06T08:12:07Z","content_type":"text/html","content_length":"93073","record_id":"<urn:uuid:d576b720-656e-48e4-a628-31de4f9c3123>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00185.warc.gz"}
|
Experimental Probability - Coach Carvalhal
Experimental Probability
Experimental probability
The empirical probability, relative frequency, or experimental probability of an event is the ratio of the number of outcomes in which a specified event occurs to the total number of trials, not in a
theoretical sample space but in an actual experiment.
Related Posts
|
{"url":"https://coachcarvalhal.com/experimental-probability-176/","timestamp":"2024-11-05T09:18:28Z","content_type":"text/html","content_length":"125760","record_id":"<urn:uuid:79e4e467-3471-46db-8f08-d673486b5817>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00288.warc.gz"}
|
The Speed Of Light In Scientific Notation: A Closer Look - Jamie Foster Science
The Speed Of Light In Scientific Notation: A Closer Look
The speed of light, commonly represented by the letter c, is one of the most fundamental constants in physics. But have you ever wondered what this speed looks like expressed in scientific notation?
Understanding how to write and interpret the speed of light in scientific notation provides insight into this lightning-fast universal speed limit.
If you’re short on time, here’s the quick answer: The speed of light in scientific notation is c = 3.00 x 108 m/s. This means the speed of light equals 3 times 108, or 300,000,000, meters per second.
In this article, we’ll explore the speed of light in scientific notation in detail. We’ll look at the value of c and how it’s measured, how to express c scientifically, and some examples of using the
speed of light in equations and problems.
With this comprehensive guide, you’ll gain a solid understanding of one of the most famous scientific constants.
The Speed of Light Constant
Defining the speed of light c
The speed of light, denoted by the symbol c, is a fundamental constant in physics that represents the maximum speed at which information or matter can travel in the universe. It is an essential
quantity in various scientific disciplines, including astronomy, optics, and relativity.
In scientific notation, the speed of light is approximately 299,792,458 meters per second (2.998 x 10^8 m/s). This means that light can travel a staggering distance of about 9.461 trillion kilometers
in a year!
How c was first measured accurately
The accurate measurement of the speed of light was a significant scientific achievement that took many years and ingenious experiments. One of the first successful attempts was made by Danish
astronomer Ole Rømer in the late 17th century.
Rømer observed the moons of Jupiter and noticed that their apparent position shifted depending on the Earth’s distance from Jupiter. By carefully analyzing these observations, he deduced that light
takes time to travel, and therefore, the speed of light is finite.
Later, in the 19th century, the French physicist Hippolyte Fizeau devised an experiment using a rapidly rotating toothed wheel and a beam of light to measure the speed of light. He successfully
measured the speed to be approximately 313,000 kilometers per second, which was impressively close to the modern value.
Why c is considered a universal constant
The speed of light constant, c, is considered a universal constant because it has the same value in all inertial reference frames, regardless of the motion of the source or observer. This principle
is one of the fundamental tenets of Einstein’s theory of relativity.
It means that the speed of light is a fundamental limit that cannot be exceeded by any object or signal in the universe.
The constancy of the speed of light is crucial for our understanding of the universe. It allows us to make precise calculations and predictions in various areas of physics, such as the behavior of
particles, the bending of light in gravitational fields, and the concept of spacetime.
It forms the foundation for many scientific theories and has been extensively verified through experiments and observations.
For more information on the speed of light and its significance in physics, you can visit NASA’s official website.
Writing the Speed of Light in Scientific Notation
Scientific notation is a way to represent numbers that are extremely large or small in a concise and convenient manner. It is commonly used in scientific and mathematical calculations, making it an
essential tool for scientists and researchers.
When it comes to expressing the speed of light, scientific notation is particularly useful due to its immense value. Let’s take a closer look at how to write the speed of light in scientific
Basic structure of scientific notation
Scientific notation consists of two main components: the mantissa and the exponent. The mantissa represents the significant digits of the number, while the exponent indicates the power of 10 by which
the mantissa is multiplied.
For example, the number 3,000,000 can be written in scientific notation as 3 x 10^6. In this case, the mantissa is 3 and the exponent is 6.
Converting c to standard notation
The speed of light, denoted by the symbol c, is approximately 299,792,458 meters per second. To represent this value in scientific notation, we can write it as 2.99792458 x 10^8 m/s. Here, the
mantissa is 2.99792458 and the exponent is 8.
By using scientific notation, we can easily handle calculations involving the speed of light without dealing with a long string of digits.
Reasons to use scientific notation for c
There are several reasons why scientists choose to write the speed of light in scientific notation:
1. Convenience: The speed of light is an incredibly large number, and writing it in standard notation can be cumbersome and prone to errors. Scientific notation allows for a more concise and
manageable representation.
2. Compatibility with calculations: Scientific notation simplifies complex calculations involving the speed of light. By converting the value of c into scientific notation, scientists can easily
perform mathematical operations such as multiplication, division, and exponentiation.
3. Comparison with other values: When comparing the speed of light with other quantities, such as the speed of sound or the velocity of objects in space, scientific notation provides a clear and
straightforward way to make meaningful comparisons.
Using the Speed of Light in Equations
One of the most fundamental constants in physics is the speed of light, denoted by the symbol ‘c’. It plays a crucial role in a wide range of scientific calculations and equations. Understanding how
to incorporate the speed of light into these equations is essential for accurately modeling the behavior of light and other electromagnetic phenomena.
Inserting c in algebraic formulas
When working with algebraic formulas that involve the speed of light, it is important to remember that ‘c’ represents a constant value. For example, in the equation E=mc², ‘c’ is squared and
multiplied by the mass ‘m’ to calculate the energy ‘E’.
This equation, famously derived by Albert Einstein, demonstrates the equivalence of mass and energy.
Similarly, in Maxwell’s equations, which describe the behavior of electromagnetic fields, ‘c’ appears as the speed at which electromagnetic waves propagate through space. By inserting ‘c’ into these
equations, scientists can accurately predict and analyze various electromagnetic phenomena, from the propagation of light to the behavior of radio waves.
Calculating relativistic effects
The speed of light also plays a crucial role in the theory of relativity, where it is used to calculate various relativistic effects. For instance, the Lorentz factor, denoted by the symbol ‘γ’, is
used to calculate time dilation, length contraction, and relativistic mass increase.
The Lorentz factor is derived from the ratio of an object’s velocity to the speed of light.
Relativistic effects become significant as objects approach the speed of light. These effects have been experimentally verified and are essential for understanding phenomena such as time dilation in
high-speed travel or the behavior of particles in particle accelerators.
Solving problems step-by-step
When encountering problems that involve the speed of light, it is important to break them down step-by-step to ensure accurate calculations. Start by identifying the relevant equations and
substituting the known values.
Then, solve for the unknowns while taking into account the appropriate units and the speed of light.
It is also useful to refer to trusted sources for guidance and examples when working with the speed of light in equations. Websites like physics.info or Khan Academy provide comprehensive
explanations and practice problems to enhance understanding and proficiency in using the speed of light in equations.
Practical Uses of the Speed of Light Constant
GPS satellite synchronization
The speed of light plays a crucial role in the accurate synchronization of GPS satellites. GPS relies on the time it takes for signals to travel between satellites and receivers on the ground. By
knowing the speed of light, GPS systems can calculate the distance between the satellite and the receiver, allowing for precise positioning information.
This technology is widely used in navigation systems, ensuring that we can find our way accurately and efficiently.
Computing distance to stars and galaxies
The speed of light is also instrumental in measuring astronomical distances. Astronomers use a method called “parallax” to determine the distance to stars and galaxies. By measuring the apparent
shift in position of a star as the Earth orbits the Sun, they can calculate the distance based on the known speed of light.
This technique has allowed scientists to map the vast expanse of our universe and gain insights into its structure and evolution.
Future applications in tech and research
The speed of light constant continues to be a key factor in the development of technology and research. As our understanding of light and its properties grows, new applications are being explored.
For example, scientists are investigating the use of light-based communication systems, known as Li-Fi, as a potential alternative to Wi-Fi.
Li-Fi utilizes the speed of light to transmit data, offering faster and more secure connections. Additionally, the speed of light is crucial in the field of quantum computing, where it is used to
transmit information between quantum bits (qubits).
This has the potential to revolutionize computing power and enable breakthroughs in various scientific fields.
Source: NASA
Misconceptions About the Speed of Light
That c can change or be exceeded
One common misconception about the speed of light is that it can be changed or even exceeded. However, according to the theory of relativity proposed by Albert Einstein, the speed of light in a
vacuum, denoted by the symbol c, is an absolute constant.
It is the fastest speed possible in the universe, and nothing can travel faster than it. This means that no matter how much energy is applied or how advanced technology becomes, c remains constant at
approximately 299,792,458 meters per second.
This concept may be difficult to grasp, as we are used to objects having different speeds depending on various factors. However, the speed of light is an exception to this rule and serves as a
fundamental constant in physics.
It plays a crucial role in our understanding of the universe and has been confirmed through numerous experiments and observations.
That light speed is only exact in a vacuum
Another misconception is that the speed of light is only exact in a vacuum. While it is true that the speed of light is slightly slower when passing through a medium such as air, water, or glass, it
is still considered to be a universal constant.
When light travels through a medium, it interacts with the atoms or molecules in that medium, causing a slight delay in its speed. However, these delays are relatively small and do not significantly
affect the overall speed of light.
It is important to note that the speed of light in a vacuum is often used as a reference point in scientific calculations and equations. This is because the speed of light in a vacuum is the maximum
speed possible, and any other speeds are relative to it.
So, even though the speed of light may be slightly slower in certain mediums, it is still considered to be an accurate measure of the fundamental speed limit of the universe.
That c only applies to light, not causality
One misconception that arises from the term “speed of light” is that it only applies to the speed at which light travels. However, the speed of light, denoted by c, is not limited to just light
waves. It is a fundamental constant that applies to all forms of electromagnetic radiation, including radio waves, microwaves, X-rays, and gamma rays.
Furthermore, the speed of light is not only relevant to the propagation of electromagnetic waves but also to the concept of causality. In physics, causality refers to the principle that an effect
cannot occur before its cause.
The constant speed of light plays a crucial role in maintaining this principle. It ensures that information and signals cannot travel faster than light, preventing paradoxes and maintaining the order
of cause and effect in the universe.
While the speed of light is lightning quick at 300 million meters per second, expressing it in scientific notation as c = 3.00 x 108 m/s helps make this huge number more comprehensible. Understanding
the speed of light constant in scientific notation provides a clearer view of its place in physics equations and applications.
Whether you’re an expert or casual science fan, gaining insight into this famous constant sheds light on how our universe operates at the most fundamental scale.
|
{"url":"https://www.jamiefosterscience.com/speed-of-light-science-notation/","timestamp":"2024-11-08T15:43:25Z","content_type":"text/html","content_length":"100990","record_id":"<urn:uuid:c73eff4e-d0c8-42b8-8a92-cff284750744>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00631.warc.gz"}
|
How to time travel
Before I begin, there is one thing you need to understand: every event has a time and a place. An event happening in Sacramento and New York at the same exact time is not the same event because
they’re happening in different places. An event in Sacramento in June is not the same event as one that’s happening in Sacramento in August because they’re at different times. But, if an event
happens at the same time and at the same place, then it’s the same event. Got it?
Some things you should know about Figure 1 below [1]:
• In Figure 1, there's a rectangular box. At the bottom of the box is a light source.
• On the other side of the box is a mirror reflecting that light source.
• There are two events happening in the image. Event A is when the light leaves the light source. Event B is when the light returns to the light source after hitting the mirror.
• The box is on a conveyor belt moving to the right at a constant speed.
As a punishment, you got put inside the box and are stuck in it. What you see is on the left side of the image. Since you're moving with the box, Events A and B occur at the same place: Point W.
We know the speed of light, c, travels at 186,000 miles per second and we also know that speed c is constant and cannot move any faster or slower (this is part of the general Principle of Relativity
Inside the box, you measure the time between event A and event B as taking 0:00:01 seconds.
The right side of the image shows what I see as I look at the box from above the conveyor belt. I’m in charge of making sure you don’t jump out and escape your punishment.
Since the box is moving to the right of me, I observe events A and B occurring at different locations. Event A happens at point X and event B happens at point Y.
Since I see the light as now traveling in a diagonal and therefore having to go a further distance than it would if it were moving in a straight line, I can conclude it took longer for event B to
occur after event A than it did for you. In this case, the time between event A and event B was measured at 0:00:01.07.
Therefore, you experienced time moving slower by .07 seconds than I did.
To clarify: these aren’t two different events. It’s the same event happening at the same time and the same place but viewed from two different reference frames. In the reference frame inside the box,
events A and B happen in the same location. In a reference frame outside of the box, events A and B happen at different locations, namely X and Y.
So you, the person inside the box, are seeing the events occur at the same place and saying 0:00:01 seconds elapsed between the two.
I'm saying 0:00:01.07 seconds elapsed between the two.
This is time dilation.
The formal (yet simple) definition of time dilation is:
The time between two events is shortest when measured in a reference frame from where the two events occur at the same place.
For you, the person in the box that was moving on the conveyor belt relative to me, the light took 0:00:01 seconds to get from event A to event B. That's correct. That's how long it took.
But for me, the person that's not moving relative to the box, I saw the light take a longer path. Therefore, it had to take a longer amount of time than for you, which is why I measured 0:00:01.07
seconds between events A and B.
Therefore, we can conclude that time moved slower for you, and in this example, 0:00:00.07 seconds slower than time moved for me.
Note: The time allotted here is purely for example purposes and not real. I’m using the two times to demonstrate time dilation and not using the actual formula. Using the actual formula, an event
that a stationary observer says took 0.1 seconds, an observer moving at 124,274.2 miles/second would see it take 0.1342385 seconds.
How to time travel
We can use time dilation to time travel.
To illustrate this, let’s look at two co-workers: Stella and Earl.
Stella wants to see what a star looks like in space, so she boards a spaceship and travels 186,000 miles per second and zooms off to a star in our galaxy, and comes back.
To Stella, the trip only takes a few days. But when she comes back, Earl is 20 years older!
Stella time-traveled 20 years into the future.
How did this happen? Time dilation.
Since the two events happened in the same place for Stella (event A is when she left Earth and event B is when she returned) and she was moving at near the speed of light, she experienced time moving
more slowly than Earl did, which is why the trip took only a few days. For Earl, the trip took 20 years!
This thought experiment was demonstrated by a test conducted in the 70s with atomic clocks.
Named the Hafele–Keating experiment, after scientists Joseph C. Hafele and Richard E. Keating, researchers sent atomic clocks around the world aboard commercial airliners. When they returned, they
compared their results to atomic clocks that stayed resting on the ground.
Here are the results:
As an example (it wasn’t really this dramatic), the clock on the ground elapsed 10 hours between takeoff and landing while the clock on the plane only elapsed 9 hours between the same takeoff and the
same landing. In reality, the clocks gained about 0.15 microseconds compared to the stationary clock [2].
The clock would then be 1 hour "into the future."
This is how Stella did time traveled.
The clocks (and Stella) experienced a change in gravity as they got further from the center of Earth’s gravitational pull. As gravity decreases the further something gets in the atmosphere, the
faster time will move.
Counter-acting that effect is special relativity’s time dilation caused by velocity, which is what we experienced with the boxes. The faster one’s relative velocity is compared to a stationary
observer, the slower it will experience time in a reference frame from the observer.
Therefore, if you had a spaceship that could travel near the speed of light, you could time travel because you’d be moving so fast that time essentially comes to a halt [3]. Hopefully, you like what
you discover, because you couldn’t ever come back.
Note: Time dilation effects aren’t noticeable unless you’re traveling at or near the speed of light. This is why it’s not something you have to deal with on a day-to-day basis and also why the
effects were so minimal in the Hafele–Keating experiment.
Note 2: I am no physicist. I’m a curious person who reads books and tries to understand things and in the process, teach myself and try to teach others. I may have gotten some details wrong, but I
tried to be certain of what I published. As I understand time dilation, and it’s a simple understanding I have, this is how it would work. If I’m wrong about something, please correct me and point me
somewhere I can learn about my mistakes.
|
{"url":"https://www.dltn.io/posts/how-to-time-travel","timestamp":"2024-11-11T10:46:08Z","content_type":"text/html","content_length":"34582","record_id":"<urn:uuid:0c350730-e952-4033-8dc7-72462376f782>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00137.warc.gz"}
|
Core Maths: UCAS Tariff points, performance tables, funding and discounting
Do Core Maths qualifications attract UCAS Tariff points?
Core Maths qualifications attract the same UCAS tariff points as an AS Level:
Are these Level 3 qualifications included in performance tables or the post-16 condition of funding?
Core Maths qualifications are approved for the:
• ‘Level 3 Maths’ measure for the 16-19 performance tables
• Level 3 Maths qualification for Techbacc
• Level 3 Maths qualification for the post-16 condition of funding
Can students take both AS Level Maths and Core Maths at the same time?
The only entry exclusions are that candidates cannot be entered for Core Maths B in the same examination series as Core Maths A and vice versa.
Core Maths qualifications are suitable for those:
• with a grade 4 or C, or above, in GCSE Maths at age 16
• not taking AS or A Level maths or an International Baccalaureate (IB) maths certificate
Therefore we don’t expect centres to deliver Core Maths alongside AS or A Level Maths. Core Maths exams are timetabled in the same slots as the two AS Level Maths exams. If centres have candidates
sitting both AS Level Maths and Core Maths they would need to follow the guidance given by the Joint Council for Qualifications (JCQ) for timetable clashes.
16-18 performance points would be available for both qualifications as the discount code for Core Maths is different to AS and A Level Maths.
0 comments
Article is closed for comments.
|
{"url":"https://support.ocr.org.uk/hc/en-gb/articles/9913167190674-Core-Maths-UCAS-Tariff-points-performance-tables-funding-and-discounting","timestamp":"2024-11-06T20:21:21Z","content_type":"text/html","content_length":"24743","record_id":"<urn:uuid:72fd605d-6f8e-4db9-ad5b-cf7fc40bafe7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00106.warc.gz"}
|
ATOM Volume III: Problems for Mathematical Leagues
This volume contains a selection of some of the problems that have been used in the Newfoundland and Labrador Senior Mathematics League, which is sponsored the the Newfoundland and Labrador Teachers
Association Mathematics Special Interest Council. The support of many teachers and schools is gratefully acknowledged.
We also acknowledge with thanks the assistance from the staff of the Department of Mathematics and Statistics, especially Ros English, Wanda Heath, Menie Kavanagh and Leonce Morrissey, in the
preparation of this material.
Many of the problems in the booklet admit several approaches. As opposed to our earlier 1995 book of problems, Shaking Hands in Corner Brook, available from the Waterloo Mathematics Foundation, this
booklet contains no solutions, only answers. Also, the problems are arranged in the form in which we use them in games. We hope that this will be of use to other groups running Mathematics
|
{"url":"https://cms.math.ca/book/atom-volume-iii-problems-for-mathematical-leagues/","timestamp":"2024-11-06T05:43:00Z","content_type":"text/html","content_length":"149719","record_id":"<urn:uuid:4cc52fb9-fc83-4b51-a032-7e7cff346fc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00208.warc.gz"}
|
Enviromental Engineering
1.Water Demand Fire Demand Kuichhlings Formula Freeman Formula National Board of fire Under Writer Formula Buston's Formula Per Capita Demand (q) Assessment of Normal variation Populating Forecasting
Methods 2.Conduits for transpoting water Head loss Darcy-Weisbatch formula Hazen-William's Formula Modified Hazen-William's Formula Mannings Formula pressure Acting on Pressure Conduits Hoop tension
or Circumferential tension(σ) Water Hammer Pressure 3.Development of ground water Darcy law's Speccific law's Specific yield Specific retension Slot opening Pack Aquifer Ratio (P.A) Well Losses
4.Quality Control of water Supplies Total Solid Suspended Solid Theshold odour number Consistituents of Alkalinity PH Value of water Hardness of water Biochemical oxygen demand (BOD) 5.Water
treatment Theory of sedimentation Sedimentation tank Chemical use for coagulation Mixing Basin Flocculation Fileration Rapid sand filter Hydraullic of Sand gravity filters Hydraullic head loss
expansion of the filter during backwash Disinfection or Strilization Type of Chlorination Starch iodide test Water Softning 6.Design of Sewere Hydraulic Formulaas for Determining Flow Velocities in
Sewers and drains Chezy's Formula Kutter formula Bazin formula Manning's Formula Crimp and Burge's Formula William Hazen's Formula Shields Expression for self Cleaning velocity Hydaulic
Characteristic of Circular Sewer Sewer is designed maximum hourly discharge 7.Quality & Characterstics of Sewage Aerobic Decomposition Anaerobic Decoposition Threshold Odour Number (Ton) Total
Solids,Suspended Solids and settleable Solids Chemical Oxygen Demand Biochemical Oxygen Demand 8.Disposing of The Sewage Effluents Standard of Dilution for discharge of wastewaters into rivers
Dilution and Dispersion Zone of Pollution in river stream Relative Stability (S) Stretcher-PHELPS equation 9.Treatment of sewage Settling Velocity Proportional Flow weir Prabolically or V Shaped Grit
Chamber Provide with Parshall flume Skimming tank Sedimentation tank Trickling Filter Sludge Digestion tank Destruction and Removal Efficiency (DRE) Aeration tank 10.Air and sound pollution Primary
Pollutants Secondary Pollutants Wind Speed Plume Characterstic of sound and its mesurement power of sound Noise Rating System Cpcb Standards of noise levels (dB) Ambient air quality standards dB
|
{"url":"https://www.sudhanshucivil2010.com/post/enviromental-engineering","timestamp":"2024-11-04T02:21:49Z","content_type":"text/html","content_length":"1050497","record_id":"<urn:uuid:19f20058-24f4-4fce-966a-9b66bdfa7ea0>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00709.warc.gz"}
|
An empty vial weighs 50.90 g. If the vial weighs 439.31 g when filled with liquid mercury (d = 13.53 g/cm^3), what volume of mercury is in the vial? | Socratic
An empty vial weighs 50.90 g. If the vial weighs 439.31 g when filled with liquid mercury (d = 13.53 g/cm^3), what volume of mercury is in the vial?
1 Answer
Your strategy here will be to use the mass of the empty vial and the mass of the vial when filled with mercury to find the mass of mercury added to the vial.
Once you know the mass of mercury, you can use its density to find the volume of mercury present in the vial, i.e. the volume of the vial.
So, you know that the vial + mercury has a mass of
$\text{vial " + " mercury" = "439.31 g}$
The mass of the vial is
$\text{vial " = " 50.90 g}$
This means that the mass of the mercury will be
$\text{mercury" = "439.31 g" - "50.90 g" = "388.41 g}$
Now, you know that mercury has a density of ${\text{13.53 g/cm}}^{3}$. What this means is that ${\text{1 cm}}^{3}$ of mercury has a mass of $\text{13.53 g}$.
In your case, the volume of the sample will be
#388.41 color(red)(cancel(color(black)("g"))) * "1 cm"^3/(13.53color(red)(cancel(color(black)("g")))) = color(green)(bar(ul(|color(white)(a/a)color(black)("28.71 cm"^3)color(white)(a/a)|)))#
The answer is rounded to four sig figs, the number of sig figs you have for the mass of the empty vial.
Impact of this question
16083 views around the world
|
{"url":"https://socratic.org/questions/an-empty-vial-weighs-50-90-g-if-the-vial-weighs-439-31-g-when-filled-with-liquid","timestamp":"2024-11-04T11:54:47Z","content_type":"text/html","content_length":"35728","record_id":"<urn:uuid:faf21b0e-3fb2-4a02-bf46-1924ff8dd339>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00818.warc.gz"}
|
Eigenvector Centrality - Neo4j Graph Data Science
Eigenvector Centrality
Eigenvector Centrality is an algorithm that measures the transitive influence of nodes. Relationships originating from high-scoring nodes contribute more to the score of a node than connections from
low-scoring nodes. A high eigenvector score means that a node is connected to many nodes who themselves have high scores.
The algorithm computes the eigenvector associated with the largest absolute eigenvalue. To compute that eigenvalue, the algorithm applies the power iteration approach. Within each iteration, the
centrality score for each node is derived from the scores of its incoming neighbors. In the power iteration method, the eigenvector is L2-normalized after each iteration, leading to normalized
results by default.
The PageRank algorithm is a variant of Eigenvector Centrality with an additional jump probability.
There are some things to be aware of when using the Eigenvector centrality algorithm:
• Centrality scores for nodes with no incoming relationships will converge to 0.
• Due to missing degree normalization, high-degree nodes have a very strong influence on their neighbors' score.
This section covers the syntax used to execute the Eigenvector Centrality algorithm in each of its execution modes. We are describing the named graph variant of the syntax. To learn more about
general syntax variants, see Syntax overview.
Eigenvector Centrality syntax per mode
Run Eigenvector Centrality in stream mode on a named graph.
CALL gds.eigenvector.stream(
graphName: String,
configuration: Map
nodeId: Integer,
score: Float
Table 1. Parameters
Name Type Default Optional Description
graphName String n/a no The name of a graph stored in the catalog.
configuration Map {} yes Configuration for algorithm-specifics and/or graph filtering.
Table 2. Configuration
Name Type Default Optional Description
nodeLabels List of String ['*'] yes Filter the named graph using the given node labels. Nodes with any of the given labels will be included.
relationshipTypes List of String ['*'] yes Filter the named graph using the given relationship types. Relationships with any of the given types will be included.
concurrency Integer 4 yes The number of concurrent threads used for running the algorithm.
jobId String Generated yes An ID that can be provided to more easily track the algorithm’s progress.
logProgress Boolean true yes If disabled the progress percentage will not be logged.
maxIterations Integer 20 yes The maximum number of iterations of Eigenvector Centrality to run.
tolerance Float 0.0000001 yes Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and
the algorithm returns.
relationshipWeightProperty String null yes Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted.
sourceNodes List or Node or [] yes The nodes or node ids to use for computing Personalized Page Rank.
scaler String or Map None yes The name of the scaler applied for the final scores. Supported values are None, MinMax, Max, Mean, Log, and StdScore. To apply
scaler-specific configuration, use the Map syntax: {scaler: 'name', …}.
Table 3. Results
Name Type Description
nodeId Integer Node ID.
score Float Eigenvector score.
Run Eigenvector Centrality in stats mode on a named graph.
CALL gds.eigenvector.stats(
graphName: String,
configuration: Map
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
centralityDistribution: Map,
configuration: Map
Table 4. Parameters
Name Type Default Optional Description
graphName String n/a no The name of a graph stored in the catalog.
configuration Map {} yes Configuration for algorithm-specifics and/or graph filtering.
Table 5. Configuration
Name Type Default Optional Description
nodeLabels List of String ['*'] yes Filter the named graph using the given node labels. Nodes with any of the given labels will be included.
relationshipTypes List of String ['*'] yes Filter the named graph using the given relationship types. Relationships with any of the given types will be included.
concurrency Integer 4 yes The number of concurrent threads used for running the algorithm.
jobId String Generated yes An ID that can be provided to more easily track the algorithm’s progress.
logProgress Boolean true yes If disabled the progress percentage will not be logged.
maxIterations Integer 20 yes The maximum number of iterations of Eigenvector Centrality to run.
tolerance Float 0.0000001 yes Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and
the algorithm returns.
relationshipWeightProperty String null yes Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted.
sourceNodes List or Node or [] yes The nodes or node ids to use for computing Personalized Page Rank.
scaler String or Map None yes The name of the scaler applied for the final scores. Supported values are None, MinMax, Max, Mean, Log, and StdScore. To apply
scaler-specific configuration, use the Map syntax: {scaler: 'name', …}.
Table 6. Results
Name Type Description
ranIterations Integer The number of iterations run.
didConverge Boolean Indicates if the algorithm converged.
preProcessingMillis Integer Milliseconds for preprocessing the graph.
computeMillis Integer Milliseconds for running the algorithm.
postProcessingMillis Integer Milliseconds for computing the centralityDistribution.
centralityDistribution Map Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
configuration Map The configuration used for running the algorithm.
Run Eigenvector Centrality in mutate mode on a named graph.
CALL gds.eigenvector.mutate(
graphName: String,
configuration: Map
nodePropertiesWritten: Integer,
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
mutateMillis: Integer,
centralityDistribution: Map,
configuration: Map
Table 7. Parameters
Name Type Default Optional Description
graphName String n/a no The name of a graph stored in the catalog.
configuration Map {} yes Configuration for algorithm-specifics and/or graph filtering.
Table 8. Configuration
Name Type Default Optional Description
mutateProperty String n/a no The node property in the GDS graph to which the score is written.
nodeLabels List of String ['*'] yes Filter the named graph using the given node labels.
relationshipTypes List of String ['*'] yes Filter the named graph using the given relationship types.
concurrency Integer 4 yes The number of concurrent threads used for running the algorithm.
jobId String Generated yes An ID that can be provided to more easily track the algorithm’s progress.
maxIterations Integer 20 yes The maximum number of iterations of Eigenvector Centrality to run.
tolerance Float 0.0000001 yes Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and
the algorithm returns.
relationshipWeightProperty String null yes Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted.
sourceNodes List or Node or [] yes The nodes or node ids to use for computing Personalized Page Rank.
scaler String or Map None yes The name of the scaler applied for the final scores. Supported values are None, MinMax, Max, Mean, Log, and StdScore. To apply
scaler-specific configuration, use the Map syntax: {scaler: 'name', …}.
Table 9. Results
Name Type Description
ranIterations Integer The number of iterations run.
didConverge Boolean Indicates if the algorithm converged.
preProcessingMillis Integer Milliseconds for preprocessing the graph.
computeMillis Integer Milliseconds for running the algorithm.
postProcessingMillis Integer Milliseconds for computing the centralityDistribution.
mutateMillis Integer Milliseconds for adding properties to the in-memory graph.
nodePropertiesWritten Integer The number of properties that were written to the in-memory graph.
centralityDistribution Map Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
configuration Map The configuration used for running the algorithm.
Run Eigenvector Centrality in write mode on a named graph.
CALL gds.eigenvector.write(
graphName: String,
configuration: Map
nodePropertiesWritten: Integer,
ranIterations: Integer,
didConverge: Boolean,
preProcessingMillis: Integer,
computeMillis: Integer,
postProcessingMillis: Integer,
writeMillis: Integer,
centralityDistribution: Map,
configuration: Map
Table 10. Parameters
Name Type Default Optional Description
graphName String n/a no The name of a graph stored in the catalog.
configuration Map {} yes Configuration for algorithm-specifics and/or graph filtering.
Table 11. Configuration
Name Type Default Optional Description
nodeLabels List of String ['*'] yes Filter the named graph using the given node labels. Nodes with any of the given labels will be included.
relationshipTypes List of String ['*'] yes Filter the named graph using the given relationship types. Relationships with any of the given types will be included.
concurrency Integer 4 yes The number of concurrent threads used for running the algorithm.
jobId String Generated yes An ID that can be provided to more easily track the algorithm’s progress.
logProgress Boolean true yes If disabled the progress percentage will not be logged.
writeConcurrency Integer value of yes The number of concurrent threads used for writing the result to Neo4j.
writeProperty String n/a no The node property in the Neo4j database to which the score is written.
maxIterations Integer 20 yes The maximum number of iterations of Eigenvector Centrality to run.
tolerance Float 0.0000001 yes Minimum change in scores between iterations. If all scores change less than the tolerance value the result is considered stable and
the algorithm returns.
relationshipWeightProperty String null yes Name of the relationship property to use as weights. If unspecified, the algorithm runs unweighted.
sourceNodes List or Node or [] yes The nodes or node ids to use for computing Personalized Page Rank.
scaler String or Map None yes The name of the scaler applied for the final scores. Supported values are None, MinMax, Max, Mean, Log, and StdScore. To apply
scaler-specific configuration, use the Map syntax: {scaler: 'name', …}.
Table 12. Results
Name Type Description
ranIterations Integer The number of iterations run.
didConverge Boolean Indicates if the algorithm converged.
preProcessingMillis Integer Milliseconds for preprocessing the graph.
computeMillis Integer Milliseconds for running the algorithm.
postProcessingMillis Integer Milliseconds for computing the centralityDistribution.
writeMillis Integer Milliseconds for writing result data back.
nodePropertiesWritten Integer The number of properties that were written to Neo4j.
centralityDistribution Map Map containing min, max, mean as well as p50, p75, p90, p95, p99 and p999 percentile values of centrality values.
configuration Map The configuration used for running the algorithm.
All the examples below should be run in an empty database.
The examples use Cypher projections as the norm. Native projections will be deprecated in a future release.
In this section we will show examples of running the Eigenvector Centrality algorithm on a concrete graph. The intention is to illustrate what the results look like and to provide a guide in how to
make use of the algorithm in a real setting. We will do this on a small web network graph of a handful nodes connected in a particular pattern. The example graph looks like this:
The following Cypher statement will create the example graph in the Neo4j database:
(home:Page {name:'Home'}),
(about:Page {name:'About'}),
(product:Page {name:'Product'}),
(links:Page {name:'Links'}),
(a:Page {name:'Site A'}),
(b:Page {name:'Site B'}),
(c:Page {name:'Site C'}),
(d:Page {name:'Site D'}),
(home)-[:LINKS {weight: 0.2}]->(about),
(home)-[:LINKS {weight: 0.2}]->(links),
(home)-[:LINKS {weight: 0.6}]->(product),
(about)-[:LINKS {weight: 1.0}]->(home),
(product)-[:LINKS {weight: 1.0}]->(home),
(a)-[:LINKS {weight: 1.0}]->(home),
(b)-[:LINKS {weight: 1.0}]->(home),
(c)-[:LINKS {weight: 1.0}]->(home),
(d)-[:LINKS {weight: 1.0}]->(home),
(links)-[:LINKS {weight: 0.8}]->(home),
(links)-[:LINKS {weight: 0.05}]->(a),
(links)-[:LINKS {weight: 0.05}]->(b),
(links)-[:LINKS {weight: 0.05}]->(c),
(links)-[:LINKS {weight: 0.05}]->(d);
This graph represents eight pages, linking to one another. Each relationship has a property called weight, which describes the importance of the relationship.
The following statement will project a graph using a Cypher projection and store it in the graph catalog under the name 'myGraph'.
MATCH (source:Page)-[r:LINKS]->(target:Page)
RETURN gds.graph.project(
{ relationshipProperties: r { .weight } }
Memory Estimation
First off, we will estimate the cost of running the algorithm using the estimate procedure. This can be done with any execution mode. We will use the write mode in this example. Estimating the
algorithm is useful to understand the memory impact that running the algorithm on your graph will have. When you later actually run the algorithm in one of the execution modes the system will perform
an estimation. If the estimation shows that there is a very high probability of the execution going over its memory limitations, the execution is prohibited. To read more about this, see Automatic
estimation and execution blocking.
The following will estimate the memory requirements for running the algorithm:
CALL gds.eigenvector.write.estimate('myGraph', {
writeProperty: 'centrality',
maxIterations: 20
YIELD nodeCount, relationshipCount, bytesMin, bytesMax, requiredMemory
Table 13. Results
nodeCount relationshipCount bytesMin bytesMax requiredMemory
8 14 696 696 "696 Bytes"
In the stream execution mode, the algorithm returns the score for each node. This allows us to inspect the results directly or post-process them in Cypher without any side effects. For example, we
can order the results to find the nodes with the highest Eigenvector score.
For more details on the stream mode in general, see Stream.
The following will run the algorithm in stream mode:
CALL gds.eigenvector.stream('myGraph')
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
Table 14. Results
name score
"Home" 0.7465574981728249
"About" 0.33997520529777137
"Links" 0.33997520529777137
"Product" 0.33997520529777137
"Site A" 0.15484062876886298
"Site B" 0.15484062876886298
"Site C" 0.15484062876886298
"Site D" 0.15484062876886298
The above query is running the algorithm in stream mode as unweighted. Below, one can find an example for weighted graphs.
In the stats execution mode, the algorithm returns a single row containing a summary of the algorithm result. For example Eigenvector stats returns centrality histogram which can be used to monitor
the distribution of centrality scores across all computed nodes. This execution mode does not have any side effects. It can be useful for evaluating algorithm performance by inspecting the
computeMillis return item. In the examples below we will omit returning the timings. The full signature of the procedure can be found in the syntax section.
For more details on the stats mode in general, see Stats.
The following will run the algorithm and return statistics about the centrality scores.
CALL gds.eigenvector.stats('myGraph', {
maxIterations: 20
YIELD centralityDistribution
RETURN centralityDistribution.max AS max
Table 15.
The mutate execution mode extends the stats mode with an important side effect: updating the named graph with a new node property containing the score for that node. The name of the new property is
specified using the mandatory configuration parameter mutateProperty. The result is a single summary row, similar to stats, but with some additional metrics. The mutate mode is especially useful when
multiple algorithms are used in conjunction.
For more details on the mutate mode in general, see Mutate.
The following will run the algorithm in mutate mode:
CALL gds.eigenvector.mutate('myGraph', {
maxIterations: 20,
mutateProperty: 'centrality'
YIELD nodePropertiesWritten, ranIterations
Table 16. Results
nodePropertiesWritten ranIterations
The write execution mode extends the stats mode with an important side effect: writing the score for each node as a property to the Neo4j database. The name of the new property is specified using the
mandatory configuration parameter writeProperty. The result is a single summary row, similar to stats, but with some additional metrics. The write mode enables directly persisting the results to the
For more details on the write mode in general, see Write.
The following will run the algorithm in write mode:
CALL gds.eigenvector.write('myGraph', {
maxIterations: 20,
writeProperty: 'centrality'
YIELD nodePropertiesWritten, ranIterations
Table 17. Results
nodePropertiesWritten ranIterations
By default, the algorithm considers the relationships of the graph to be unweighted. To change this behaviour, we can use the relationshipWeightProperty configuration parameter. If the parameter is
set, the associated property value is used as relationship weight. In the weighted case, the previous score of a node sent to its neighbors is multiplied by the normalized relationship weight. Note,
that negative relationship weights are ignored during the computation.
In the following example, we use the weight property of the input graph as relationship weight property.
The following will run the algorithm in stream mode using relationship weights:
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
relationshipWeightProperty: 'weight'
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
Table 18. Results
name score
"Home" 0.8328163407319487
"Product" 0.5004775834976313
"About" 0.1668258611658771
"Links" 0.1668258611658771
"Site A" 0.008327591469710233
"Site B" 0.008327591469710233
"Site C" 0.008327591469710233
"Site D" 0.008327591469710233
As in the unweighted example, the "Home" node has the highest score. In contrast, the "Product" now has the second highest instead of the fourth highest score.
We are using stream mode to illustrate running the algorithm as weighted, however, all the algorithm modes support the relationshipWeightProperty configuration parameter.
The tolerance configuration parameter denotes the minimum change in scores between iterations. If all scores change less than the configured tolerance, the iteration is aborted and considered
converged. Note, that setting a higher tolerance leads to earlier convergence, but also to less accurate centrality scores.
The following will run the algorithm in stream mode using a high tolerance value:
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
tolerance: 0.1
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
Table 19. Results
name score
"Home" 0.7108273818583551
"About" 0.3719400001993262
"Links" 0.3719400001993262
"Product" 0.3719400001993262
"Site A" 0.14116155811301126
"Site B" 0.14116155811301126
"Site C" 0.14116155811301126
"Site D" 0.14116155811301126
We are using tolerance: 0.1, which leads to slightly different results compared to the stream example. However, the computation converges after three iterations, and we can already observe a trend in
the resulting scores.
Personalised Eigenvector Centrality
Personalized Eigenvector Centrality is a variation of Eigenvector Centrality which is biased towards a set of sourceNodes. By default, the power iteration starts with the same value for all nodes: 1
/ |V|. For a given set of source nodes S, the initial value of each source node is set to 1 / |S| and to 0 for all remaining nodes.
The following examples show how to run Eigenvector centrality centered around 'Site A'.
The following will run the algorithm and stream results:
MATCH (siteA:Page {name: 'Site A'}), (siteB:Page {name: 'Site B'})
CALL gds.eigenvector.stream('myGraph', {
maxIterations: 20,
sourceNodes: [siteA, siteB]
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
Table 20. Results
name score
"Home" 0.7465645391567868
"About" 0.33997203172449453
"Links" 0.33997203172449453
"Product" 0.33997203172449453
"Site A" 0.15483736775159632
"Site B" 0.15483736775159632
"Site C" 0.15483736775159632
"Site D" 0.15483736775159632
Scaling centrality scores
Internally, centrality scores are scaled after each iteration using L2 normalization. As a consequence, the final values are already normalized. This behavior cannot be changed as it is part of the
power iteration method.
However, to normalize the final scores as part of the algorithm execution, one can use the scaler configuration parameter. A description of all available scalers can be found in the documentation for
the scaleProperties procedure.
The following will run the algorithm in stream mode and returns normalized results:
CALL gds.eigenvector.stream('myGraph', {
scaler: "MINMAX"
YIELD nodeId, score
RETURN gds.util.asNode(nodeId).name AS name, score
ORDER BY score DESC, name ASC
Table 21. Results
name score
"Home" 1.0
"About" 0.312876962110942
"Links" 0.312876962110942
"Product" 0.312876962110942
"Site A" 0.0
"Site B" 0.0
"Site C" 0.0
"Site D" 0.0
Comparing the results with the stream example, we can see that the relative order of scores is the same.
|
{"url":"https://neo4j.com/docs/graph-data-science/current/algorithms/eigenvector-centrality/","timestamp":"2024-11-07T05:48:02Z","content_type":"text/html","content_length":"151675","record_id":"<urn:uuid:f7c60663-e3b3-4ef5-ae63-b493d022fc51>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00164.warc.gz"}
|
CS3 Data Structures & Algorithms - BC and Slides -
12.5. 2-3 Trees¶
12.5.1. 2-3 Trees¶
This section presents a data structure called the 2-3 tree. The 2-3 tree is not a binary tree, but instead its shape obeys the following definition:
1. A node contains one or two keys.
2. Every internal node has either two children (if it contains one key) or three children (if it contains two keys). Hence the name.
3. All leaves are at the same level in the tree, so the tree is always height balanced.
In addition to these shape properties, the 2-3 tree has a search tree property analogous to that of a BST. For every node, the values of all descendants in the left subtree are less than the value of
the first key, while values in the center subtree are greater than or equal to the value of the first key. If there is a right subtree (equivalently, if the node stores two keys), then the values of
all descendants in the center subtree are less than the value of the second key, while values in the right subtree are greater than or equal to the value of the second key. To maintain these shape
and search properties requires that special action be taken when nodes are inserted and deleted. The 2-3 tree has the advantage over the BST in that the 2-3 tree can be kept height balanced at
relatively low cost. Here is an example 2-3 tree.
Nodes are shown as rectangular boxes with two key fields. (These nodes actually would contain complete records or pointers to complete records, but the figures will show only the keys.) Internal
nodes with only two children have an empty right key field. Leaf nodes might contain either one or two keys. Here is an implementation for the 2-3 tree node class.
// 2-3 tree node implementation
class TTNode<Key extends Comparable<? super Key>,E> {
private E lval; // The left record
private Key lkey; // The node's left key
private E rval; // The right record
private Key rkey; // The node's right key
private TTNode<Key,E> left; // Pointer to left child
private TTNode<Key,E> center; // Pointer to middle child
private TTNode<Key,E> right; // Pointer to right child
public TTNode() { center = left = right = null; }
public TTNode(Key lk, E lv, Key rk, E rv,
TTNode<Key,E> p1, TTNode<Key,E> p2,
TTNode<Key,E> p3) {
lkey = lk; rkey = rk;
lval = lv; rval = rv;
left = p1; center = p2; right = p3;
public boolean isLeaf() { return left == null; }
public TTNode<Key,E> lchild() { return left; }
public TTNode<Key,E> rchild() { return right; }
public TTNode<Key,E> cchild() { return center; }
public Key lkey() { return lkey; } // Left key
public E lval() { return lval; } // Left value
public Key rkey() { return rkey; } // Right key
public E rval() { return rval; } // Right value
public void setLeft(Key k, E e) { lkey = k; lval = e; }
public void setRight(Key k, E e) { rkey = k; rval = e; }
public void setLeftChild(TTNode<Key,E> it) { left = it; }
public void setCenterChild(TTNode<Key,E> it)
{ center = it; }
public void setRightChild(TTNode<Key,E> it)
{ right = it; }
Note that this sample declaration does not distinguish between leaf and internal nodes and so is space inefficient, because leaf nodes store three pointers each. We can use a class hierarcy to
implement separate internal and leaf node types.
From the defining rules for 2-3 trees we can derive relationships between the number of nodes in the tree and the depth of the tree. A 2-3 tree of height \(k\) has at least \(2^{k-1}\) leaves,
because if every internal node has two children it degenerates to the shape of a complete binary tree. A 2-3 tree of height \(k\) has at most \(3^{k-1}\) leaves, because each internal node can have
at most three children.
Searching for a value in a 2-3 tree is similar to searching in a BST. Search begins at the root. If the root does not contain the search key \(K\), then the search progresses to the only subtree that
can possibly contain \(K\). The value(s) stored in the root node determine which is the correct subtree. For example, if searching for the value 30 in the tree of Figure 12.5.1, we begin with the
root node. Because 30 is between 18 and 33, it can only be in the middle subtree. Searching the middle child of the root node yields the desired record. If searching for 15, then the first step is
again to search the root node. Because 15 is less than 18, the first (left) branch is taken. At the next level, we take the second branch to the leaf node containing 15. If the search key were 16,
then upon encountering the leaf containing 15 we would find that the search key is not in the tree. Here is an implementation for the 2-3 tree search method.
private E findhelp(TTNode<Key,E> root, Key k) {
if (root == null) { return null; } // val not found
if (k.compareTo(root.lkey()) == 0) { return root.lval(); }
if ((root.rkey() != null) && (k.compareTo(root.rkey())
== 0))
{ return root.rval(); }
if (k.compareTo(root.lkey()) < 0) { // Search left
return findhelp(root.lchild(), k);
else if (root.rkey() == null) { // Search center
return findhelp(root.cchild(), k);
else if (k.compareTo(root.rkey()) < 0) { // Search center
return findhelp(root.cchild(), k);
else { return findhelp(root.rchild(), k); } // Search right
Insertion into a 2-3 tree is similar to insertion into a BST to the extent that the new record is placed in the appropriate leaf node. Unlike BST insertion, a new child is not created to hold the
record being inserted, that is, the 2-3 tree does not grow downward. The first step is to find the leaf node that would contain the record if it were in the tree. If this leaf node contains only one
value, then the new record can be added to that node with no further modification to the tree, as illustrated in the following visualization.
If we insert the new record into a leaf node \(L\) that already contains two records, then more space must be created. Consider the two records of node \(L\) and the record to be inserted without
further concern for which two were already in \(L\) and which is the new record. The first step is to split \(L\) into two nodes. Thus, a new node—call it \(L'\)—must be created from free store. \(L
\) receives the record with the least of the three key values. \(L'\) receives the greatest of the three. The record with the middle of the three key value is passed up to the parent node along with
a pointer to \(L'\). This is called a promotion. The promoted key is then inserted into the parent. If the parent currently contains only one record (and thus has only two children), then the
promoted record and the pointer to \(L'\) are simply added to the parent node. If the parent is full, then the split-and-promote process is repeated. Here is an example of a a simple promotion.
Here is an illustration for what happens when promotions require the root to split, adding a new level to the tree. Note that all leaf nodes continue to have equal depth.
Here is an implementation for the insertion process.
private TTNode<Key,E> inserthelp(TTNode<Key,E> rt, Key k, E e) {
TTNode<Key,E> retval;
if (rt == null) {// Empty tree: create a leaf node for root
return new TTNode<Key,E>(k, e, null, null, null, null, null);
if (rt.isLeaf()) { // At leaf node: insert here
return rt.add(new TTNode<Key,E>(k, e, null, null, null, null, null));
// Add to internal node
if (k.compareTo(rt.lkey()) < 0) { // Insert left
retval = inserthelp(rt.lchild(), k, e);
if (retval == rt.lchild()) { return rt; }
else { return rt.add(retval); }
else if((rt.rkey() == null) || (k.compareTo(rt.rkey()) < 0)) {
retval = inserthelp(rt.cchild(), k, e);
if (retval == rt.cchild()) { return rt; }
else { return rt.add(retval); }
else { // Insert right
retval = inserthelp(rt.rchild(), k, e);
if (retval == rt.rchild()) { return rt; }
else { return rt.add(retval); }
// Add a new key/value pair to the node. There might be a subtree
// associated with the record being added. This information comes
// in the form of a 2-3 tree node with one key and a (possibly null)
// subtree through the center pointer field.
public TTNode<Key,E> add(TTNode<Key,E> it) {
if (rkey == null) { // Only one key, add here
if (lkey.compareTo(it.lkey()) < 0) {
rkey = it.lkey(); rval = it.lval();
center = it.lchild(); right = it.cchild();
else {
rkey = lkey; rval = lval; right = center;
lkey = it.lkey(); lval = it.lval();
center = it.cchild();
return this;
else if (lkey.compareTo(it.lkey()) >= 0) { // Add left
TTNode<Key,E> N1 = new TTNode<Key,E>(lkey, lval, null, null, it, this, null);
left = center; center = right; right = null;
lkey = rkey; lval = rval; rkey = null; rval = null;
return N1;
else if (rkey.compareTo(it.lkey()) >= 0) { // Add center
it.setCenterChild(new TTNode<Key,E>(rkey, rval, null, null, it.cchild(), right, null));
rkey = null; rval = null; right = null;
return it;
else { // Add right
TTNode<Key,E> N1 = new TTNode<Key,E>(rkey, rval, null, null, this, it, null);
right = null; rkey = null; rval = null;
return N1;
Note that inserthelp takes three parameters. The first is a pointer to the root of the current subtree, named rt. The second is the key for the record to be inserted, and the third is the record
itself. The return value for inserthelp is a pointer to a 2-3 tree node. If rt is unchanged, then a pointer to rt is returned. If rt is changed (due to the insertion causing the node to split), then
a pointer to the new subtree root is returned, with the key value and record value in the leftmost fields, and a pointer to the (single) subtree in the center pointer field. This revised node will
then be added to the parent as illustrated by the splitting visualization above.
When deleting a record from the 2-3 tree, there are three cases to consider. The simplest occurs when the record is to be removed from a leaf node containing two records. In this case, the record is
simply removed, and no other nodes are affected. The second case occurs when the only record in a leaf node is to be removed. The third case occurs when a record is to be removed from an internal
node. In both the second and the third cases, the deleted record is replaced with another that can take its place while maintaining the correct order, similar to removing a node from a BST. If the
tree is sparse enough, there is no such record available that will allow all nodes to still maintain at least one record. In this situation, sibling nodes are merged together. The delete operation
for the 2-3 tree is excessively complex and will not be described further. Instead, a complete discussion of deletion will be postponed until the next section, where it can be generalized for a
particular variant of the B-tree.
The 2-3 tree insert and delete routines do not add new nodes at the bottom of the tree. Instead they cause leaf nodes to split or merge, possibly causing a ripple effect moving up the tree to the
root. If necessary the root will split, causing a new root node to be created and making the tree one level deeper. On deletion, if the last two children of the root merge, then the root node is
removed and the tree will lose a level. In either case, all leaf nodes are always at the same level. When all leaf nodes are at the same level, we say that a tree is height balanced. Because the 2-3
tree is height balanced, and every internal node has at least two children, we know that the maximum depth of the tree is \(\log n\). Thus, all 2-3 tree insert, find, and delete operations require \
(\Theta(\log n)\) time.
Click here for another visualization that will let you construct and interact with a 2-3 tree. Actually, this visualization is for a data structure that is more general than just a 2-3 tree. To see
how a 2-3 would behave, be sure to use the “Max Degree = 3” setting. This visualization was written by David Galles of the University of San Francisco as part of his Data Structure Visualizations
|
{"url":"https://opendsa-server.cs.vt.edu/ODSA/Books/usek/gin231-202310/fall-2022/TR_930am/html/TwoThreeTree.html","timestamp":"2024-11-02T15:29:06Z","content_type":"text/html","content_length":"54409","record_id":"<urn:uuid:bdea32a7-ba18-49a7-a5bd-fa7a448a1ff5>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00277.warc.gz"}
|
Practical Integrator: Circuit and Working - Nerds Do Stuff
Practical Integrator: Circuit and Working
What is Practical Integrator ?
A practical integrator is an electronic circuit that performs the mathematical operation of integration on an input signal over time. It typically consists of an operational amplifier (op-amp)
configured with a capacitor in the feedback loop and a resistor connected to the input. As the input signal varies, the capacitor charges or discharges, causing the output voltage of the integrator
to change continuously, representing the integral of the input signal with respect to time. The output voltage of the integrator is proportional to the accumulated area under the input waveform,
making it useful for applications such as signal conditioning, waveform shaping, and frequency response analysis in electronic systems.
Practical Integrator Circuit
Practical Integrator Circuit
Working of Practical Integrator
A practical integrator operates by continuously accumulating the input signal over time to produce an output voltage proportional to the integral of the input signal. It typically consists of an
operational amplifier (op-amp) configured as an inverting amplifier with a capacitor in the feedback loop and a resistor connected to the input. When an input voltage is applied, the op-amp amplifies
the voltage across the resistor, causing a current to flow into or out of the capacitor. Since the capacitor’s voltage changes gradually according to the integral of the input voltage, the output
voltage of the integrator ramps up or down over time. This behavior effectively integrates the input signal over time, making the integrator output voltage proportional to the accumulated area under
the input waveform. The time constant of the circuit, determined by the values of the resistor and capacitor, determines the rate at which the input signal is integrated.
Characteristics of Practical Integrator
Characteristic Description
Configuration Typically consists of an operational amplifier (op-amp) configured as an inverting amplifier with a capacitor in the feedback loop and a resistor connected to the input.
Integration Performs the mathematical operation of integration on an input signal over time, producing an output voltage proportional to the integral of the input signal.
Output Response The output voltage ramps up or down over time, representing the accumulated area under the input waveform.
Time Constant The rate at which the input signal is integrated is determined by the time constant of the circuit, which is determined by the values of the resistor and capacitor.
Frequency Response The integrator’s frequency response is determined by the cutoff frequency, beyond which the integrator’s gain decreases with increasing frequency.
Applications Widely used in electronic systems for applications such as waveform shaping, signal processing, frequency analysis, and filtering.
Applications of Practical Integrator
1. Waveform Shaping: Practical integrators are used to shape the waveforms of input signals by integrating them over time. This is useful in applications such as audio processing, where integrators
can be used to generate smooth changes in volume or tone.
2. Signal Processing: Integrators are used in signal processing circuits to perform tasks such as differentiation, integration, and filtering. In particular, they are useful for extracting
low-frequency components from input signals or removing high-frequency noise.
3. Frequency Analysis: Integrators are employed in frequency analysis applications to measure the area under a signal waveform, which can provide information about the signal’s frequency content or
energy distribution.
4. Control Systems: Integrators are used in control systems to perform tasks such as calculating the integral of error signals in feedback loops. This helps in regulating system performance and
5. Instrumentation: Integrators are used in instrumentation circuits for applications such as measuring the total charge or energy in a signal waveform. This is useful in scientific experiments,
data acquisition systems, and measurement instruments.
6. Filters: Integrators are used as building blocks in filter circuits, where they are combined with other components to create low-pass, high-pass, band-pass, or band-stop filters. Integrators are
particularly useful in active filter designs, where they provide precise control over filter characteristics.
|
{"url":"https://nerdsdostuff.com/electronic_circuits/practical-integrator-circuit-and-working/","timestamp":"2024-11-13T11:30:33Z","content_type":"text/html","content_length":"132241","record_id":"<urn:uuid:558fda44-0bfe-417c-bc60-54eaf916132f>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00431.warc.gz"}
|
Fun with Circles: Chords, Secants, and Tangents Worksheet Answers (Geometry) as PDF - Knowunity
This document covers arc and angle measures formed by chords, secants and tangents. It provides a series of geometry problems focusing on circle theorems, including inscribed angles, intercepted
arcs, and angles formed by various line segments in relation to circles. The worksheet challenges students to apply their knowledge of circle geometry to solve complex problems.
|
{"url":"https://knowunity.com/knows/geometry-geometry-arcs-and-angles-4afd01b0-ea02-4852-b449-7db2a66d8aed?utm_content=taxonomy","timestamp":"2024-11-07T15:28:58Z","content_type":"text/html","content_length":"366538","record_id":"<urn:uuid:65a673de-8d47-4b70-81bf-478b132ca060>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00099.warc.gz"}
|
A recursive formula for the product of element orders of finite abelian groups
A recursive formula for the product of element orders of finite abelian groupsArticle
Let G be a finite group and let ψ(G) denote the sum of element orders of G; later this concept has been used to define R(G) which is the product of the element orders of G. Motivated by the recursive
formula for ψ(G), we consider a finite abelian group G and obtain a similar formula for R(G).
Volume: Volume 32 (2024), Issue 1
Published on: April 4, 2023
Accepted on: February 27, 2023
Submitted on: February 27, 2023
Keywords: Finite group,Element order,Abelian group,2010 Mathematics subject classification: 20K01,2010 Mathematics subject classification: 20D60,[MATH]Mathematics [math]
|
{"url":"https://cm.episciences.org/11154","timestamp":"2024-11-09T07:39:12Z","content_type":"application/xhtml+xml","content_length":"36828","record_id":"<urn:uuid:7293651b-30e6-4872-8998-b51daeb2aa78>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00668.warc.gz"}
|
Fraction Calculator - Symbolab
About Fraction Calculator
• A fraction calculator is an efficient tool that simplifies the process of performing mathematical operations on fractions. It is an online or software-based solution that saves time and minimizes
errors that might arise when manually calculating complicated fraction problems.
• This calculator is well-equipped to perform a range of functions including adding, subtracting, multiplying, dividing, simplifying, and converting fractions. It is used widely by students and
professionals alike, especially those dealing with mathematics or tasks requiring a high level of numerical accuracy.
• To utilize this tool, users input the fractions they wish to operate on and choose the operation they would like performed (addition, subtraction, multiplication or division). In seconds, the
tool will generate results that are simplified to the lowest terms. This immediate response aids in rapid problem solving, streamlining homework, and enhancing learning experiences. Since the
fraction calculator follows established mathematical rules, it eliminates the inconsistencies that may occur from personal error, promoting higher accuracy in mathematical calculations.
• The beauty of the fraction calculator lies not just in the speed at which it can work, but also in its ability to handle mixed numbers and improper fractions. Mixed numbers are converted into
improper fractions before calculations and if the answer is an improper fraction, the tool can convert it back into a mixed number for easier interpretation.
• A fraction calculator brings tremendous aid in understanding fractions conceptually. Some of these calculators highlight the steps involved in the calculation process, hence aiding at grasping
the fundamental mathematical principles being applied. Thus, while delivering correct answers, they also help elucidate often complex relationships between numbers and mathematical operations.
• Having this tool as part of mobile applications and web platforms has made it increasingly accessible to people globally. Besides, they are user-friendly, providing easy-to-understand
instructions and interfaces that even children can navigate comfortably.
• The role of a fraction calculator extends beyond academic uses. It is valuable for professionals such as engineers, architects, and developers who frequently deal with numerical data in their
work. Furthermore, being proficient at handling fractions is a crucial life skill that can come in handy in various real-life situations, such as cooking, carpentry, or managing finances.
• Although a fraction calculator simplifies mathematical operations and reduces chances of error, it is essential not to become overly reliant on it, given the importance of developing mental math
and problem-solving skills. The tool should be used as a means of verifying calculations, quick problem-solving, and understanding the process of handling fractions.
• In essence, a fraction calculator is beneficial to individuals across different age groups, occupations, and proficiency levels in numeracy. It enhances conceptual understanding, boosts
efficiency, ensures high accuracy, and serves as a practical aid in many life situations. This transformational tool democratises access to learning, enabling everyone to gain competency in
fractions regardless of their mathematical background.
|
{"url":"https://zt.symbolab.com/calculator/math/fraction","timestamp":"2024-11-14T10:16:38Z","content_type":"text/html","content_length":"261538","record_id":"<urn:uuid:fcfd954e-597c-4e60-b948-60c63925724a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00391.warc.gz"}
|
Let's Make Squares Activity | Math = Love
Let’s Make Squares Activity
This blog post contains Amazon affiliate links. As an Amazon Associate, I earn a small commission from qualifying purchases.
Several years ago, I heard about the Let’s Make Squares activity from an OKMath Newsletter sent out by Christine Koerner. She recommended Let’s Make Squares as a great activity for the first week of
I had already decided that I was going to do the 2s to 9s Challenge as my first day of school activity, but I was intrigued by the square activity. I wrote a note in my Google Keep to investigate
this activity, but I never did anything with it until the other day.
It turns out the Let’s Make Squares activity was from a book that I already had on my shelf – Cooperative Learning by Dr. Spencer Kagan. I actually have an older edition of the book (1994 edition).
Newer versions of the book appear to have more copy-friendly blackline masters. Since my edition didn’t, I decided to type up my own set of directions to give my students.
For the Let’s Make Squares Activity, students will need to be in groups of 4. Each group member needs three pieces (I used jumbo popsicle sticks) in a unique color from the rest of the group. If you
don’t have access to colored popsicle sticks, you can also use strips of colored paper.
Groups must work together to use all 12 sticks/strips to form various numbers of squares. Each teammate is only allowed to touch/move their color of sticks/strips. This encourages groups to work
together and communicate as a group as they work through the various levels of the puzzle.
Pieces must lay flat on the table. They cannot be folded, bent, torn, or broken in any way. This is one of the reasons I chose to use colored popsicle sticks instead of colored paper.
I included a diagram of what moves are allowed/not allowed in the process of making squares.
Sticks are always allowed to cross. They are never allowed to be stacked on top of one another or be arranged so sticks are touching one another along the long edge.
The hardest rule for students to follow is that “extras” are not allowed. This means that every stick must contribute to the making of squares.
Each group gets a bag of popsicle sticks (or colored paper strips), a set of instructions, and a recording sheet.
The recording sheet is my own addition to this activity. I decided I wanted a way for students to record their solutions as they found them instead of having to wait for me to check each of their
solutions before moving on.
Plus, it gives students a way to keep track of which numbers of squares they have found and which ones they still need to find.
All twelve puzzles have a solution!
I cut some 11 inch strips from some colored cardstock to make a set of demonstration pieces to use with students on my dry erase board.
I added disc magnets to the back of each piece.
I show them that sticks are allowed to cross.
I also demonstrate the actions that are not allowed.
Extras. For example, this would not count as one square since all 12 pieces are not contributing to the square.
Still not sure what I mean about making squares? Check out some action shots I took when I tested this activity with my senior statistics students during the last bit of the school year.
There was much debate over whether this next picture contained 4 or 5 squares! When students decided that it was actually 5 squares, this opened up an entire new window of possibilities for them.
Realizing that they could also overlap squares was another huge realization.
This Let’s Make Squares activity resulted in some great communication and collaboration. There were many exciting lightbulb moments to witness! I’m excited to use it as a first week of school
activity with my students this coming year!
More Fun Activities for the First Week of School
117 Comments
1. Hi Sarah. Thank you for this!! My students are LOVING it. Do you have the solutions to all 12 squares? I would love to see how you figured out 8, 9.
1. Just emailed you, Anita!
2. Could I have the solution for 8, 9, and 12 squares?
1. Just emailed you, DeeDee!
1. Could I get the answer key as well?
1. Emailing the answer key to you. I have also added it to the downloads section.
3. LOVED this activity! Team building, geometry, problem solving, we couldn’t get enough. Can you please email me the solutions? vannessc09@gmail.com I have student pictures if you would like to
1. Sending you an email, Caroline!
4. Thank you for the wonderful resource. Would you mind emailing me the answer key as well?
1. Emailing you. I’ve also added it to the downloads section.
5. Could I please get your solutions? We are baffled by a few of them. Thanks!
1. Sending you an email! I have also added the solutions to the downloads section.
6. Hi Sarah. Am I able to also get the solutions? Thank you!
1. Emailing you. I also added it to the downloads section.
7. Thank you for this! Would it be possible to get the answers? My email address is: chloeyazdani28@gmail.com
1. Emailing you! I have also added them to the downloads section.
8. Yikes I would really appreciate the solutions sheet. Great activity – thank you.
1. Emailing you, Linda! I have also added the solutions to the downloads section.
9. Thanks for your Mahi (Maori word for work). Love the ideas. 1a.m. in Auckland NZ. we in lockdown but will use this when we get back to school. Can’t think right now. Can I get solutions please?
1. Emailing them to you! Hope you are out of lockdown soon!
10. Also, wondering if the outer most square counts as one square when dividing the square up.
1. Yes, all the squares count, no matter the size.
11. Could I get solutions to 8, 9, and 12? Thank you!
1. Sending you an email. I have also added the solutions to the downloads section.
12. Hi Sarah
Can you send the answer key to me? Can’t wait to have the middle school students give this a try.
Thank you
1. Emailing you, Michele! I have also added the solutions to the downloads section.
13. Can we get a solutions page? We are struggling with 9.
1. Emailing you! For anyone else, I have also added it as one of the downloads.
14. hi! my kids are struggling with some of these; could I see possible solutions? the kids are LOVING this though!!
1. Sending you an email. I also added the solutions to the downloads section.
15. Would love the solutions as well!! shannonkraegel@live.com
1. Emailing you. I have also posted the solutions as one of the downloads.
16. Hi Sarah, I am also unable to come up with the 8, 9, 12 solutions for this puzzle.
1. I just emailed them to you. I also uploaded them to the downloads section.
17. I would love an answer key. Please and thank you!
1. Just sent you an email. The answer key can also be found in the downloads section above.
18. Could I please get the answer sheet to the let’s make squares game?
1. Emailed!
19. Solutions please? lisacook@summerlandbobcats.org
1. Emailed you, Lisa!
20. Solutions, Please and Thank you!!
1. I just sent them your way!
|
{"url":"https://mathequalslove.net/lets-make-squares/comment-page-2/","timestamp":"2024-11-08T09:27:50Z","content_type":"text/html","content_length":"422623","record_id":"<urn:uuid:3b821c2f-ae14-4846-baf6-3e9a96c658f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00155.warc.gz"}
|
Google Sheets Average: How to Calculate and Use Averages in Google Sheets
Last Modified: February 22, 2024 - 4 min read
Hannah Recker
In the realm of spreadsheet applications, Google Sheets stands out for its functionality and ease of use, especially when it comes to calculating averages.
The AVERAGE function in Google Sheets is a fundamental tool for professionals like financial analysts and data scientists, enabling quick and accurate computation of mean values from large data sets.
This blog will explore how to use the Google Sheets AVERAGE function.
Understanding Google Sheets Average
Basic Concept
The AVERAGE function in Google Sheets is used to calculate the average of a range of numbers. This function takes one or more arguments, which can be either individual numbers or cell references that
contain numbers. It then adds up all the numbers and divides the sum by the total number of values in the range.
For example, if you have a range of cells A1 that contains the values 10, 20, 30, 40, and 50, you can use the AVERAGE function to calculate the average of these values.
The function syntax would be =AVERAGE(A1), and the result would be 30.
Function Syntax
The syntax of the AVERAGE function in Google Sheets is straightforward. The function takes one or more arguments, separated by commas, and returns the average of the values in the range.
The basic syntax of the AVERAGE function is as follows:
=AVERAGE(value1, [value2, …])
Here, value1 is the first value or range of values that you want to calculate the average of. You can add additional values or ranges by separating them with commas.
For example, to calculate the average of the values in cells A1 through A5 and B1 through B5, you would use the following syntax:
=AVERAGE(A1:A5, B1:B5)
It is important to note that the AVERAGE function only works with numerical values. If you try to use the function with non-numerical values, such as text or blank cells, the function will return an
Application of Google Sheets Average
Google Sheets Average is a useful tool for calculating the average of a range of numbers in a Google Sheets spreadsheet. It is a simple and easy-to-use function that can be used in a variety of
real-world scenarios.
Real World Scenarios
Google Sheets Average can be used in a variety of real-world scenarios. For example, a teacher might use it to calculate the average grade of a class, or a business owner might use it to calculate
the average revenue for a particular month.
Advanced Usage
Google Sheets Average can also be used in more advanced scenarios.
For example, it can be used in conjunction with other functions like AVERAGEIF and AVERAGEIFS to calculate the average of a range of numbers based on certain criteria.
Scenario: Finding the average score in a class but only for students who have passed (score above 50).
Using the formula ‘=AVERAGEIF(B2:B6, “>50”)’ calculates the average score for students who scored more than 50.
Free AI-Powered Tools Right Within Your Spreadsheet
Supercharge your spreadsheets with GPT-powered AI tools for building formulas, charts, pivots, SQL and more. Simple prompts for automatic generation.
Furthermore, Google Sheets Average can be used to calculate weighted averages.
This is useful when calculating the average of a set of numbers where some numbers are more important than others. To calculate a weighted average, the user needs to multiply each number by its
weight and then divide the sum of the products by the sum of the weights.
Use AI to Generate Average Formulas
You can use Coefficient’s free Formula Builder to automatically create the formulas in this first example. To use Formula Builder, you need to install Coefficient. The install process takes less than
a minute.
You can get started for free right from our website.
Search for “Coefficient”. Click on the Coefficient app in the search results.
Accept the prompts to install. Once the installation is finished, return to Extensions on the Google Sheets menu. Coefficient will be available as an add-on.
Now launch the app. Coefficient will run on the sidebar of your Google Sheet. Select GPT Copilot on the Coefficient sidebar.
Then click Formula Builder.
Type a description of a formula into the text box. For this example, type: Calculate the average score in column B if the score is above 50.
Then press ‘Build’. Formula Builder will automatically generate the formula from the first example.
In conclusion, Google Sheets Average is a simple and easy-to-use tool that can be used in a variety of real-world scenarios. It can also be used in more advanced scenarios when combined with other
To further boost your data analysis skills by using GPT, consider Coefficient’s Google Sheets extension. Install Coefficient for free today and see how easy it makes business data analysis.
Try the Spreadsheet Automation Tool Over 500,000 Professionals are Raving About
Tired of spending endless hours manually pushing and pulling data into Google Sheets? Say goodbye to repetitive tasks and hello to efficiency with Coefficient, the leading spreadsheet automation tool
trusted by over 350,000 professionals worldwide.
Sync data from your CRM, database, ads platforms, and more into Google Sheets in just a few clicks. Set it on a refresh schedule. And, use AI to write formulas and SQL, or build charts and pivots.
Hannah Recker Growth Marketer
Hannah Recker was a data-driven growth marketer before partying in the data became a thing. In her 12 years experience, she's become fascinated with the way data enablement amongst teams can truly
make or break a business. This fascination drove her to taking a deep dive into the data industry over the past 4 years in her work at StreamSets and Coefficient.
500,000+ happy users
Wait, there's more!
Connect any system to Google Sheets in just seconds.
Get Started Free
Trusted By Over 50,000 Companies
|
{"url":"https://coefficient.io/google-sheets-tutorials/google-sheets-average","timestamp":"2024-11-11T23:48:41Z","content_type":"text/html","content_length":"75429","record_id":"<urn:uuid:b993f6ac-198b-44c7-bf4b-f59081871e6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00073.warc.gz"}
|
Comparing Different Species of Cross-Validation
random forest
This is the first of two posts about the performance characteristics of resampling methods. I just had major shoulder surgery, but I’ve pre-seeded a few blog posts. More will come as I get better at
one-handed typing.
First, a review:
• Resampling methods, such as cross-validation (CV) and the bootstrap, can be used with predictive models to get estimates of model performance using the training set.
• These estimates can be made to tune the model or to get a good sense of how the model works without touching the test set.
There are quite a few methods for resampling. Here is a short summary (more in Chapter 4 of the book):
• k-fold cross-validation randomly divides the data into k blocks of roughly equal size. Each of the blocks is left out in turn and the other k-1 blocks are used to train the model. The held out
block is predicted and these predictions are summarized into some type of performance measure (e.g. accuracy, root mean squared error (RMSE), etc.). The k estimates of performance are averaged to
get the overall resampled estimate. k is 10 or sometimes 5. Why? I have no idea. When k is equal to the sample size, this procedure is known as Leave-One-Out CV. I generally don’t use it and
won’t consider it here.
• Repeated k-fold CV does the same as above but more than once. For example, five repeats of 10-fold CV would give 50 total resamples that are averaged. Note this is not the same as 50-fold CV.
• Leave Group Out cross-validation (LGOCV), aka Monte Carlo CV, randomly leaves out some set percentage of the data B times. It is similar to min-training and hold-out splits but only uses the
training set.
• The bootstrap takes a random sample with replacement from the training set B times. Since the sampling is with replacement, there is a very strong likelihood that some training set samples will
be represented more than once. As a consequence of this, some training set data points will not be contained in the bootstrap sample. The model is trained on the bootstrap sample and those data
points not in that sample are predicted as hold-outs.
Which one should you use? It depends on the data set size and a few other factors. We statisticians tend to think about the operating characteristics of these procedures. For example, each of the
methods above can be characterized in terms of their bias and precision.
Suppose that you have a regression problem and you are interested in measuring RMSE. Imagine that, for your data, there is some “true” RMSE value that a particular model could achieve. The bias is
the difference between what the resampling procedure estimates your RMSE to be for that model and the true RMSE. Basically, you can think of it as accuracy of estimation. The precision measures how
variable the result is. Some types of resampling have higher bias than others and the same is true for precision.
Imagine that the true RMSE is the target we are trying to hit and suppose that we have four different types of resampling. This graphic is typically used when we discuss accuracy versus precision.
Generally speaking, the bias of a resampling procedure is thought to be related to how much data is held out. If you hold-out 50% of your data using 2-fold CV, the thinking is that your final RMSE
estimate will be more biased than one that held out 10%. On the other hand, the conventional wisdom is that holding less data out decreases precision since each hold-out sample has less data to get a
stable estimate of performance (i.e. RMSE).
I ran some simulations to evaluate the precision and bias of these methods. I simulated some regression data (so that I know the real answers and compute the true estimate of RMSE). The model that I
used was random forest with 1000 trees in the forest and the default value of the tuning parameter. I simulated 100 different data sets with 500 training set instances. For each data set, I also used
each of the resampling methods listed above 25 times using different random number seeds. In the end, we can compute the precision and average bias of each of these resampling methods.
I won’t show the distributions of the precision and bias values across the simulations but use the median of these values. The median represents the distributions well and are simpler to visualize.
Question 1a and 1b: How do the variance and bias change in basic CV? Also, Is it worth repeating CV?
First, let’s look at how the precision changes over the amount of data held-out and the training set size. We use the variance of the resampling estimates to measure precision.
First, a value of 5 on the x-axis is 5-fold CV and 10 is 10-fold CV. Values greater than 10 are repeated 10-fold (i.e. a 60 is six repeats of 10-fold CV). For on the left-hand side of the graph (i.e.
5-fold CV), the median variance is 0.019. This measures how variable 5-fold CV is across all the simulated data sets.
What about bias? The conventional wisdom is that the bias should be better for the 10-fold CV replicates since less is being left out in those cases. Here are the results:
From this, 5-fold CV is pessimistically biased and that bias is reduced by moving to 10-fold CV. Perhaps it is within the noise, but it would also appear that repeating 10-fold CV a few times can
also marginally reduce the bias.
Question 2a and 2b: How does the amount held back affect LGOCV? Is it better than basic CV?
Looking at the leave-group-out CV results, the variance analysis shows an interesting pattern:
Visually at least, it appears that the amount held-out has a slightly a bigger influence on the variance of the results than the number of times that the process is repeated. Leaving more out buys
you better individual resampled RMSE values (i.e. more precision).
That’s one side of the coin. What about the bias?
Also, the number of held-out data sets doesn’t appear to reduce the bias.
One these results alone, if you use LGOCV try to leave a small amount out (say 10%) and do a lot of replicates to control the variance. But… why not just do repeated 10-fold CV?
We have simulations where both LGOCV and 10-fold CV left out 10%. We can do a head-to-head comparison of these results to see which procedure seems to work better. Recall that the main difference
between these two procedures is that repeated 10-fold CV splits the hold-out data points evenly within a fold. LGOCV just randomly selects samples each time. In ease case, the same training set
sample will show up in more than one of the hold-out data sets so the difference is more about configuration of samples.
Here are the variance curves:
That seems pretty definitive: all other things being equal, you gain about a log unit of precision using repeated 10-fold CV instead of LGOCV with a 10% hold-out.
The bias curves show a fair amount of noise (keeping in mind the scale of this graph compared to the other bias images above):
I would say that there is no real difference in bias and expected this prior to seeing the results. We are always leaving 10% behind and, if this is what drives bias, the two procedures should be
about the same.
So my overall conclusion, so far, is that repeated 10-fold CV is the best in terms of variance and bias. As always, caveats apply. For example, if you have a ton of data, the precision and bias of
10- or even 5-fold CV may be acceptable. Your mileage may vary.
The next post will look at:
• the variance and bias of the nominal bootstrap estimate
• a comparison of repeated 10-fold CV to the bootstrap
• the out-of-bag estimate of RMSE from the individual random forest model and how it compares to the other procedures.
EDIT: based on the comments, here is one of the simulation files. I broke them up to run in parallel on our grid but they are all the same (except the seeds). Here is the markdown file for the post
if you want the plot code or are interested to see how I summarized the results.
(This article was originally posted at http://appliedpredictivemodeling.com)
|
{"url":"https://blog.aml4td.org/posts/comparing-different-species-of-crossvalidation/","timestamp":"2024-11-04T17:02:28Z","content_type":"application/xhtml+xml","content_length":"32437","record_id":"<urn:uuid:2bb89863-112e-4110-993c-4ca3cf9639a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00711.warc.gz"}
|
How do you explain a regression model?
How do you explain a regression model?
Regression analysis is the method of using observations (data records) to quantify the relationship between a target variable (a field in the record set), also referred to as a dependent variable,
and a set of independent variables, also referred to as a covariate.
What is regression explain with example?
Regression is a statistical method used in finance, investing, and other disciplines that attempts to determine the strength and character of the relationship between one dependent variable (usually
denoted by Y) and a series of other variables (known as independent variables).
What is the use of regression analysis with example?
Regression analysis is used in stats to find trends in data. For example, you might guess that there’s a connection between how much you eat and how much you weigh; regression analysis can help you
quantify that.
How do you write a regression equation?
A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of
y when x = 0).
How do you create a regression table?
Click on the “Data” tab at the top of the Excel window and then click the “Data Analysis” button when it appears on the ribbon. Select “Regression” from the list that appears in the Data Analysis
window and then click “OK.”
What is a good R squared value?
Any study that attempts to predict human behavior will tend to have R-squared values less than 50%. However, if you analyze a physical process and have very good measurements, you might expect
R-squared values over 90%.
What method does Excel use for linear regression?
The three main methods to perform linear regression analysis in Excel are: Regression tool included with Analysis ToolPak. Scatter chart with a trendline.
How do you present multiple regression results in a table?
Still, in presenting the results for any multiple regression equation, it should always be clear from the table: (1) what the dependent variable is; (2) what the independent variables are; (3) the
values of the partial slope coefficients (either unstandardized, standardized, or both); and (4) the details of any test of …
How do you analyze a regression table?
21:40Suggested clip 119 secondsHow to interpret regression tables – YouTubeYouTubeStart of suggested clipEnd of suggested clip
How do you interpret multiple regression in SPSS?
8:41Suggested clip 65 secondsInterpreting Output for Multiple Regression in SPSS – YouTubeYouTubeStart of suggested clipEnd of suggested clip
What is regression in SPSS?
Introduction. Multiple regression is an extension of simple linear regression. It is used when we want to predict the value of a variable based on the value of two or more other variables. The
variable we want to predict is called the dependent variable (or sometimes, the outcome, target or criterion variable).
What is multiple regression in research?
Multiple regression is a general and flexible statistical method for analyzing associations between two or more independent variables and a single dependent variable. Multiple regression is most
commonly used to predict values of a criterion variable based on linear associations with predictor variables.
What is the difference between simple and multiple regression?
It is also called simple linear regression. It establishes the relationship between two variables using a straight line. If two or more explanatory variables have a linear relationship with the
dependent variable, the regression is called a multiple linear regression. …
What are some applications of multiple regression models?
Multiple regression models are used to study the correlations between two or more independent variables and one dependent variable. These would be useful when conducting research where two possible
independent variables could affect one dependent variable.
What is regression and its application?
Regression is a statistical tool used to understand and quantify the relation between two or more variables. Regressions range from simple models to highly complex equations. The two primary uses for
regression in business are forecasting and optimization.
What are some real life examples of regression?
A simple linear regression real life example could mean you finding a relationship between the revenue and temperature, with a sample size for revenue as the dependent variable. In case of multiple
variable regression, you can find the relationship between temperature, pricing and number of workers to the revenue.
|
{"url":"https://www.kingfisherbeerusa.com/how-do-you-explain-a-regression-model/","timestamp":"2024-11-13T07:33:42Z","content_type":"text/html","content_length":"48054","record_id":"<urn:uuid:bab907ef-42ae-469f-a47f-a0bf38bf3015>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00617.warc.gz"}
|
s and
Chris Daw
• Research Division Lead
• LMS Correspondent
Room 106
Building location
Areas of interest
• Arithmetic geometry
• Number theory
• Model theory
• Ergodic theory
• Algebraic groups
• Unlikely intersections in Shimura varieties
Current teaching (2024/25):
Research centres and groups
Number Theory
Research projects
My research focuses on certain mathematical objects known as Shimura varieties. Shimura varieties arise in many guises, but they may be most familiar as the parameter spaces of abelian varieties.
Abelian varieties are central objects in number theory — the simplest abelian varieties, known as elliptic curves, are now at the heart of modern cryptography. For this reason, and others, questions
regarding the geometry of Shimura varieties can inspire and illuminate many questions in number theory and beyond.
2014 - 2016 Hodge Fellow, Institut des Hautes Etudes Scientifiques
2016 - 2017 EPSRC Postdoctoral Research Assistant, University of Oxford
2016 - 2017 Junior Research Fellow, Linacre College, Oxford
2017 - 2018 Academic Visitor, Mathematical Institute, University of Oxford
2018 - Visiting Research Fellow, Mathematical Institute, University of Oxford
Academic qualifications
MSci Mathematics, University College London (2010)
PhD Mathematics, University College London (2014)
Professional bodies/affiliations
Member, London Mathematical Society
|
{"url":"https://www.reading.ac.uk/maths-and-stats/staff/chris-daw","timestamp":"2024-11-08T08:30:07Z","content_type":"text/html","content_length":"58527","record_id":"<urn:uuid:e7c5ac9f-1997-4288-924a-1de91fb5658a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00267.warc.gz"}
|
Retro Game Internals: Punch-Out Passwords
The NES game “Mike Tyson’s Punch-Out” uses a password system to allow players to continue from certain points in the game. Each password consists of 10 digits, each of which can be any number from 0
to 9. There are 2 kinds of passwords that the game will accept, which I’ll call “regular” and “special” passwords. Special passwords are specific sets of 10 digits that the game looks for and reacts
to in a unique way when they are entered. The complete list of special passwords is as follows:
• 075 541 6113 – Busy signal sound 1
• 800 422 2602 – Busy signal sound 2
• 206 882 2040 – Busy signal sound 3
• 135 792 4680 – Play hidden circuit: “Another World Circuit” (must hold Select button and press A + B to accept password)
• 106 113 0120 – Shows credits (must hold Select button and press A + B to accept password)
• 007 373 5963 – Takes you to fight against Mike Tyson
The other kind of passwords that the game accepts are the regular passwords. Regular passwords are a coded representation of the progress that you’ve made in the game. The game data that is encoded
into a regular password includes:
• Number of career wins
• Number of career losses
• Number of wins by knockout
• Next opponent to fight
Encoding Passwords
We’ll use an example game with 24 wins, 1 loss, 19 knockouts and starting at the world circuit title fight against Super Macho Man to see how passwords are generated.
The process of encoding this game state into a password begins by collecting your number of wins, losses and KOs into a buffer. The game represents each number in binary-coded decimal form with 8
bits per digit and 2 digits for each value. So for our 24 wins, that’s one byte with the value 2, and a second byte with the value 4. Then comes the pair of bytes for the number of losses and one
more pair for KOs for a total of 6 bytes of data. The diagram below shows these 6 bytes with their values in both decimal and binary on the bottom.
The next step is to generate a checksum over those 6 bytes. The checksum byte is calculated by adding the 6 individual bytes together and then subtracting the result from 255. In our case we have 2 +
4 + 0 + 1 + 1 + 9 = 17 and then 255 – 17 = 238.
Next we shuffle some of the bits from the 6 bytes into a new buffer. We can think of the new buffer as a single 28-bit intermediate value that we’ll begin to fill in piece by piece. The bits from the
first buffer are split into groups of 2 and moved around to different hard coded positions in the second. This is the first of a few steps whose sole job is to obfuscate the data make it difficult
for players to see how passwords are generated.
Notice that not all of the bits from the original buffer are transferred into the new intermediate buffer. These bits are ignored because they are known to always be 0. Your number of losses in
particular only needs to contribute 2 bits of information to the password because of the rules of the game. If your total number of losses ever gets up to 3, you’ll get a game over and never get a
password. Therefore, you only need to be able to represent the numbers 0, 1 and 2 for your number of losses and that requires only 2 bits.
Next we shuffle more bit pairs into the intermediate buffer. The first four pairs come from the checksum value that we computed earlier. The other pair of bits comes from the opponent value. The
opponent value is a number that tells which opponent you’re going to fight next after you enter your password. There are three possible opponent values that can be used:
0 – DON FLAMENCO (first fight of major circuit)
1 – PISTON HONDA (first fight of world circuit)
2 – SUPER MACHO MAN (last fight of world circuit)
Since we wanted to generate a password that puts us at Super Macho Man, we’ll use 2 for our opponent value. The checksum and opponent value bits are then shuffled into the intermediate bits as
The next step is to perform some number of rotations to the left on the intermediate bits. A single leftward rotation means that all of the bits move one place to the left, and the bit that used to
be the leftmost bit rotates all the way around to become the new rightmost bit. To calculate the number of times we’ll rotate the bits to the left, we take the sum of the opponent value and our
number of losses, add 1, and take the remainder of that result divided by 3. In our case we have 2 + 1 + 1 = 4. Then the remainder of 4/3 is 1 so we’ll rotate the intermediate bits to the left 1
At this point the intermediate bits are thoroughly scrambled and now it’s time to start breaking them apart to come up with the digits that make up our password. Passwords need 10 digits so we’re
going to break our 28 intermediate bits into 10 separate numbers that we’ll call password values P0, P1, P2, etc. The first nine password values will get 3 bits of data each, with the final value
getting just 1 of the intermediate bits. To round out the final password value, we will also include bits that represent the number of rotations we performed in the previous step.
Finally we add a unique hard coded offset to the password value in each position. The final password digit is then the remainder of that sum divided by 10. For example in the seventh position we use
an offset of 1, so we have 5 + 1 = 6, and then the final digit will be the remainder of 6/10 which is 6. In the fourth position the offset we use is 7, so we have 5 + 7 = 12, and then the final digit
is the remainder of 12/10 which is 2.
Now we have the final password digits which we can try out in the game.
Decoding Passwords
The process of decoding a password back into the numbers of wins/losses/KOs and the opponent value is a straight forward reversal of all the steps outlined above and is left as an exercise for the
reader. There are two notable mistakes however that the game makes when decoding and verifying player supplied passwords.
The first mistake happens in the very first step of decoding a password which would be to subtract out the offsets to get back to the password values. The original password values contained 3 bits of
data each, which means their values before the offsets were applied must have all been in the range 0-7. However, a player might supply a password that results in a password value of 8 or 9 after the
offset is subtracted out (modulo 10.) Instead of rejecting such a password immediately, the game fails to check for this case and instead allows the extra bit of data in the password value to pollute
the collection of intermediate bits in such a way that passwords are no longer unique. Because certain intermediate bits could have either been set by the password digit that they correspond to OR
the extra bit of a neighboring password value, there are multiple passwords that can now map back to the same set of intermediate bits. This is why you can find different passwords that give you the
same in-game result, where as they would have been unique otherwise.
The second mistake is a bug in the logic the game uses to try to validate the data after decoding the password. The conditions that the game tries to enforce are:
• checksum that is stored in the password matches what the checksum should be given the win/loss/KO record stored in the password
• loss value is 0, 1 or 2
• opponent value is 0, 1, or 2
• rotate count stored in the password is the correct rotate count given the loss value and opponent value that were stored in the password
• all digits of win/loss/KO record stored in the password are in the range 0-9
• wins is >= 3
• wins is >= KOs
If any of these conditions are not met, the game intended to reject the password. However, there is a bug in the implementation of the final check (specifically when comparing BCD encoded numbers.)
Instead of actually checking for wins >= KOs, the game will allow cases where the hi digit of wins is 0, the lo digit of wins is >= 3 and the hi digit of KOs is less than the lo digit of wins. For
example, a record of 3 wins, 0 losses and 23 KOs will be accepted by the game (as demonstrated by the password 099 837 5823) when this was intended to be rejected (since you can’t have won 23 fights
by knockout if you’ve only won 3 fights total.)
The particular details of this encoding scheme are unique to Punch-Out, but the general idea of taking some important bits of game state, transforming them in a reversible way to obfuscate their
relationship to the original game state, and then using them to generate some number of symbols to show to the player as a password is fairly universal. Checksums can be used to make sure that
accidental random changes to the password (if your finger slips while entering it) are most likely to result in an invalid password instead of some other password representing a random other game
If you’d like to offer feedback or keep track of when new posts in this series become available, you can follow me on twitter @allan_blomquist
(This is part 1 of a series – Next)
Tags: Allan Blomquist, Retro Game, Retro Game Internals, Tomorrow Corporation
Different combinations of wins, losses, and kos can create the checksum, right? If so, is that fine? Doesn’t that defeat the purpose of checksum?
• @GK the checksum has to validate the rest of the password, it helps prevent random manipulation of the password. In order for the checksum to validate it would need to be the correct value for
those KOs, Ws and Ls. Any given checksum should not validate any given password, it has to be the checksum calculated for that password. It’s safe to assume the average user is not going to be
able to generate that checksum by accident or through arbitrary manipulation of the password.
Can’t wait to see you tackle Metroid’s password system! Amazing read, seriously loved it!
|
{"url":"https://tomorrowcorporation.com/posts/retro-game-internals-punch-out-passwords","timestamp":"2024-11-03T04:29:11Z","content_type":"application/xhtml+xml","content_length":"37893","record_id":"<urn:uuid:80834cba-c2f5-4127-b06d-02c2dbc4c5e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00344.warc.gz"}
|
Kinetic Energy
Kinetic energy is the energy an object possesses due to its motion. It is a fundamental concept in physics and is used to describe the behavior of objects in motion. Understanding kinetic energy is
crucial for many fields, including engineering, mechanics, and physics.
Definition: Kinetic energy is defined as the energy an object possesses due to its motion. The formula for calculating kinetic energy is KE = 1/2mv^2, where KE is the kinetic energy, m is the mass of
the object, and v is its velocity. The unit of kinetic energy is joules (J).
Types of Kinetic Energy: There are many different types of kinetic energy, depending on the type of motion an object is undergoing. Some examples include translational kinetic energy, rotational
kinetic energy, and vibrational kinetic energy. Each type of kinetic energy is important for understanding different types of motion and is used in a variety of applications, from robotics to energy
Conservation of Kinetic Energy: According to the law of conservation of energy, energy cannot be created or destroyed, only transferred from one form to another. This means that the total amount of
kinetic energy in a system remains constant, even as it is transferred between objects or transformed into other forms of energy. Understanding the conservation of kinetic energy is important for
understanding the behavior of systems in motion, such as collisions between objects.
Applications of Kinetic Energy: Kinetic energy is used in a variety of applications, from transportation to energy generation. For example, the kinetic energy of a moving vehicle is used to power its
motion, while the kinetic energy of wind is used to generate electricity through wind turbines. Understanding the principles of kinetic energy is crucial for designing and optimizing these systems
for maximum efficiency and performance.
Calculating Kinetic Energy: The formula for calculating kinetic energy is straightforward and involves simple multiplication and exponentiation. However, the accuracy of the calculation depends on
the accuracy of the measurements of mass and velocity. Careful measurement and calculation are necessary to accurately determine the kinetic energy of an object or system.
In summary, kinetic energy is an important concept in physics that describes the energy of an object in motion. Understanding the types, conservation, and applications of kinetic energy is crucial
for many fields and applications. By carefully measuring and calculating kinetic energy, scientists and engineers can design and optimize systems for maximum efficiency and performance.
|
{"url":"https://learnindex.com/kinetic-energy/","timestamp":"2024-11-15T03:47:40Z","content_type":"text/html","content_length":"108062","record_id":"<urn:uuid:8e45eaa8-a7ff-49cb-aa6d-e725f6fba95e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00529.warc.gz"}
|
How do you simplify $\
Hint: To solve such questions that include surds and indices, a basic knowledge of the rules of surds is necessary. The question can be solved by evaluating the expression under the root and then by
further simplifying the expression to get the final answer.
Complete step by step answer:
The given expression to simplify is $\dfrac{{\sqrt 4 }}{{36}}$ $...(i)$
Now we know that ${(2)^2} = 2 \times 2 = 4$ therefore,
$\sqrt 4 = {(4)^{1/2}}$
Which on further simplification gives us
${(4)^{1/2}} = {({2^2})^{1/2}}$
Applying the law of exponents that states ${({a^m})^n} = {a^{m \times n}}$ to the above expression,
${(4)^{1/2}} = {2^{2 \times \dfrac{1}{2}}}$
On simplifying the powers of $2$ we get
$\Rightarrow \sqrt 4 = 2$ $...(ii)$
Now substituting the value of the equation $(ii)$ in the equation $(i)$ we get,
$\Rightarrow \dfrac{{\sqrt 4 }}{{36}} = \dfrac{2}{{36}}$
Dividing and simplifying the above expression to get
$\Rightarrow \dfrac{1}{{18}}$
Hence, on simplifying $\dfrac{{\sqrt 4 }}{{36}}$ we get $\dfrac{1}{{18}}$
Therefore, $\dfrac{{\sqrt 4 }}{{36}} = \dfrac{1}{{18}}$
Additional Information:
A surd can be defined as an irrational number that can be expressed with roots, such as $\sqrt 2$ or $\sqrt[4]{{15}}$ . An index on the other hand can be defined as the power, or exponent, of a
number. For example, ${2^3}$ has an index of $3$ . When we deal with exponents or powers of a number, a root often refers to a number that is repeatedly multiplied by itself a certain fixed number of
times to get another number. A radical number can be written as shown below:
$\sqrt[n]{x}$ , where $n$ is the degree, and the root sign is known as the radical sign. The value $x$ is known as the radicand and the expression as a whole is known as the radical.
Note: While solving these types of questions it always proves extremely helpful if students remember the fundamental rules of surds and exponents. Some of the rules such as ${({a^m})^n} = {a^{m \
times n}}$ and $\sqrt {a \times b} = \sqrt a \times \sqrt b$ are used a lot of times and help to simplify the question to a great extent.
|
{"url":"https://www.vedantu.com/question-answer/how-do-you-simplify-dfracsqrt-4-36-class-10-maths-cbse-600a4ce92369370038904daf","timestamp":"2024-11-08T17:39:25Z","content_type":"text/html","content_length":"166016","record_id":"<urn:uuid:419c865b-d8e4-45ee-afc6-19ce865fde34>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00679.warc.gz"}
|
Re: Word/MathType to FrameMaker/MathML Equation
Hello all,
I will be using unstructured FrameMaker 15.0.6.956 for the project below.
I am working on a document in Word that I will be moving to FrameMaker a little later in the process. (The original doc is in Word and there are multiple writers who are still working in the Word
version using Master Docs.) The document is an engineering textbook that uses several hundred equations, for which MathType is used in the Word version. I will be moving it to FrameMaker for the ease
of formatting, indexing, styles, cross-references, etc.
I have never used the MathML Equation feature in FrameMaker, but I see that it seems very similar to MathType--a kind of MathType for FrameMaker, maybe? Anyway, I am wondering if MathType will be
importable into the FrameMaker doc. It doesn't seem to be copy/pastable, and I'm concerned that I may need to recreate every equation in the Frame version.
Thank you!
re: …no way to do that other than as image files…
And that can be done with preservation of typographic quality. If the Word doc is available as a PDF, Illustrator can open single pages, isolate individual equations, deleting everything else, shrink
the artboard to equation extents, and then save it as SVG. The SVG preserves the text as text and any non-text elements as vector art.
As you would need some way to organize them, and don't want to re-do all of them every time the .doc/x is revised
Jump to answer
|
{"url":"https://community.adobe.com/t5/framemaker-discussions/word-mathtype-to-framemaker-mathml-equation/m-p/12510657/highlight/true","timestamp":"2024-11-11T08:41:00Z","content_type":"text/html","content_length":"841622","record_id":"<urn:uuid:cb29251e-ba1d-4f2a-a0d9-481a9672daed>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00745.warc.gz"}
|
Visualizing Facebook Networks with MATLAB
When one of my guest bloggers, Toshi posted, Can We Predict a Breakup? Social Network Analysis with MATLAB, he got several questions about the new graph and network algorithms capabilities in MATLAB
introduced in R2015b. He would like to do a follow-up post to address some of those questions.
Questions from the Readers
In the comment section of my recent post about social network analysis, QC asked if there was any way to plot very large scale network (>10000 nodes) with uniform degree, and Christine Tobler kindly
provided an example and a link to a dataset collection. I would like to build on her comment in this post.
Plotting Large Scale Network in MATLAB
Can MATLAB plot large scale network with more than 10,000 nodes? Let's start by reproducing Christine's example that plots a graph with 10,443 nodes and 20,650 edges, representing an L-shaped grid.
n = 120;
A = delsq(numgrid('L',n));
G = graph(A,'OmitSelfLoops');
title('A graph with 10,443 nodes and 20,650 edges')
Facebook Ego Network Dataset
For datasets, Christine suggested the Stanford Large Network Dataset Collection. I decided to take her up on her suggestion using its Facebook dataset. This dataset contains anonymized personal
networks of connections between friends of survey participants. Such personal networks represent friendships of a focal node, known as "ego" node, and such networks are therefore called "ego"
Let's download "facebook.tar.gz" and extract its content into a "facebook" directory in the current folder. Each file starts with a node id and ends with suffix like ".circle", or ".edges". Those are
ids of the "ego" nodes. We can run loadFBdata.m to load data from those files. I will just reload the preprocessed data.
clearvars % clear workspace
% loadFBdata % run script
load facebook % or load mat file
Your variables are:
circles egofeat feat graphs
edges egoids featnames
Visualize Combined Ego Networks
Let's first combine all 10 ego networks into a graph and visualize them in a single plot. This combined network has a little fewer than 4,000 nodes but with over 84,000 edges. It also includes some
ego nodes, which means those survey participants were not entirely unrelated to one another.
comb = vertcat(edges{:}); % combine edges
comb = sort(comb, 2); % sort edge order
comb = unique(comb,'rows'); % remove duplicates
comb = comb + 1; % convert to 1-indexing
combG = graph(comb(:,1),comb(:,2)); % create undirected graph
notConnected = find(degree(combG) == 0); % find unconnected nodes
combG = rmnode(combG, notConnected); % remove them
edgeC = [.7 .7 .7]; % gray color
H = plot(combG,'MarkerSize',1,'EdgeColor',edgeC, ...% plot graph
title('Combined Ego Networks') % add title
text(17,13,sprintf('Total %d nodes', ... % add node metric
text(17,12,sprintf('Total %d edges', ... % add edge metric
text(17,11,'Ego nodes shown in red') % add edge metric
egos = intersect(egoids + 1, unique(comb)); % find egos in the graph
highlight(H,egos,'NodeColor','r','MarkerSize',3) % highlight them in red
Visualize a Single Ego Network - Degree Centrality
One of the most basic analyses you can perform on a network is link analysis. Let's figure out who are the most well connected in this graph. To make it easy to see, we can change the color by number
of connections, also known as degree, and therefore this is a metric known as degree centrality. The top 3 nodes by degree are highlighted in the plot and they all belong to the same cluster. They
are very closely connected friends!
Please note that the ego node is not included in this analysis as readme-Ego.txt says:
The 'ego' node does not appear (in the edge list), but it is assumed
that they follow every node id that appears in this file.
By nature the ego node will always be the top node, so there is no point including it.
idx = 2; % pick an ego node
egonode = num2str(egoids(idx)); % ego node name as string
G = graphs{idx}; % get its graph
deg = degree(G); % get node degrees
notConnected = find(deg < 2); % weakly connected nodes
deg(notConnected) = []; % drop them from deg
G = rmnode(G, notConnected); % drop them from graph
[~, ranking] = sort(deg,'descend'); % get ranking by degree
top3 = G.Nodes.Name(ranking(1:3)); % get top 3 node names
colormap cool % set color map
H = plot(G,'MarkerSize',log(deg), ... % node size in log scale
'NodeCData',deg,... % node color by degree
labelnode(H,top3,{'#1','#2','#3'}); % label top 3 nodes
title({sprintf('Ego Network of Node %d', ... % add title
egoids(idx)); 'colored by Degree Centrality'})
text(-1,-3,['top 3 nodes: ',strjoin(top3)]) % annotate
H = colorbar; % add colorbar
ylabel(H, 'degrees') % add metric as ylabel
Ego Network Degree Distribution
Let's check out the histogram of degrees between the ego network we just looked and the combined ego networks. People active on Facebook will have more edges than those not, but a few people have a
large number of degrees and the majority have small number of degrees, and difference is large and looks exponential.
histogram(degree(combG)) % plot histogram
hold on % don't overwrite
histogram(degree(G)) % overlay histogram
hold off % restore default
xlabel('Degrees') % add x axis label
ylabel('Number of Nodes') % add y axis label
title('Degree Distribution') % add title
legend('The Combined Ego Networks', ... % add legend
sprintf('Ego Network of Node %d',egoids(idx)))
text(150,700,'Median Degrees') % annotate
text(160,650,sprintf('* Node %d: %d', ... % annotate
text(160,600,sprintf('* Combo : %d', ... % annotate
You'll notice that median degrees seem a bit small. That's because those are from nodes included in ego networks that contain nodes that are connected to ego nodes only. So we don't see other nodes
that are not connected to the ego nodes (friends of friends). If you count the number of nodes in each ego network, you can see the degrees of each ego node, because ego nodes are supposed to be
connected to all other nodes in their respective ego networks. The median is now 1404. Is this larger or smaller than the number of your Facebook friends?
deg = cellfun(@(x) numnodes(x), graphs); % degrees of all graphs
median_deg = median(deg) % median degrees
median_deg =
Shortest Paths
We looked at degrees as a metric to evaluate nodes, and it makes sense - the more friends a node has, the better connected it is. Another common metric is shortest paths. While degrees measure direct
connections only, shortest paths consider how many hops at mininum you need to make to traverse from one node to another. Let's look at an example of the shortest path between the top node 1888 and
another node 483.
[path, d] = shortestpath(G,top3{1},'483'); % get shortest path
H = plot(G,'MarkerSize',1,'EdgeColor',edgeC); % plot graph
highlight(H,path,'NodeColor','r', ... % highlight path
labelnode(H,path, [{'Top node'} path(2:end)]) % label nodes
title('Shortest Path between Top Node and Node 483')% add title
text(1,-3,sprintf('Distance: %d hops',d)) % annotate
Closeness Centrality
Distances measured by shortest paths can be used to compute closeness centrality, as defined in Wikipedia. Let's reload the pre-computed distances using the spdist function I wrote. Those with high
closeness scores are the ones you want to start with when you want to spread news through your ego network.
load dist
closeness = 1./sum(dist); % compute closeness
[~, ranking] = sort(closeness, 'descend'); % get ranking by closeness
top3 = G.Nodes.Name(ranking(1:3)); % get top 3 node names
colormap cool % set color map
H = plot(G,'MarkerSize',closeness*10000, ... % node size by closeness
'NodeCData',closeness,... % node color by closeness
labelnode(H,top3,{'#1','#2','#3'}); % label top 3 nodes
title({sprintf('Ego Network of Node %d', ... % add title
egoids(idx)); 'colored by Closeness Centrality'})
text(-1,-3,['top 3 nodes: ',strjoin(top3)]) % annotate
H = colorbar; % add colorbar
ylabel(H, 'closeness') % add metric as ylabel
Can You Use Your Own Facebook Data?
Hopefully this post has provided a sufficient basis for further exploration with the SNAP Collection dataset. You may also want to try this on your own data. Unfortunately, you can't analyze your own
Facebook friends graph because Facebook discontinued this API service. You can, however, use apps like Netvizz to extract a "page like network", which represents Facebook pages connected through
likes. Here is the plot that shows the network of Facebook pages connected to the MATLAB page through likes using a pre-computed graph. Because this is a directed graph, we will use in-degree as the
metric. It means we only count when a page is liked by other pages, but not when it likes others.
load fbpagegraph % reload data
deg = indegree(G); % get in-degrees
[~,ranking] = sort(deg,'descend'); % rank by in-degrees
top5 = G.Nodes.Name(ranking(1:5)); % get top 5
colormap cool % set colormap to cool
H = plot(G,'MarkerSize',log(deg+2)*2, ... % log scale node size by in-degree
'NodeCData',deg, ... % color by in-degree
'EdgeColor',edgeC,'EdgeAlpha', 0.3);
labelnode(H,'MATLAB','MATLAB') % highlight MATLAB
labelnode(H,top5,{'Make: Magazine','NOAA','NWS','Maker Faire Rome','Maker Faire'})
H = colorbar; % add colorbar
ylabel(H, 'in-degrees') % add metric
title('Facebook Page Like Network Colored by In-Degree')
text(-2.8,3.5,'a network of pages connected through likes (directed)')
ann = {lab,top5}; % generate label
text(pos(:,1),pos(:,2),strcat(ann{:})) % add annotations
We only scratched the surface with the SNAP Collection - just one ego network out of 10 for Facebook, and each comes with more anonymized meta data, such as eduction, hometown, etc. and you can
figure out what binds those close-knit friends by analyzing common attributes. Furthermore, the SNAP Collection also includes datasets from other sources, sugh as Twitter and Google Plus. You can
also use Netvizz to extract data on Facebook pages you liked. Play around with those datasets and let us know what you find!
To leave a comment, please click here to sign in to your MathWorks Account or create a new one.
|
{"url":"https://blogs.mathworks.com/loren/2016/02/03/visualizing-facebook-networks-with-matlab/?","timestamp":"2024-11-10T15:37:14Z","content_type":"text/html","content_length":"199047","record_id":"<urn:uuid:f760b0e9-5fb1-4b81-b7fd-06a31767b095>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00730.warc.gz"}
|
17. Which of the following numbers give a remainder of 2 when divided by 5 and a remainder of 1 when divided by 4? A. 7 C. 17 B. 12 D. 31
17. Which of the following numbers give a remainder of 2 when divided by 5 and a remainder of 1 when divided by 4? A. 7 C. 17 B. 12 D. 31
Solution 1
The answer is D. 31.
Here's why:
Step 1: Check each option by dividing it by 5 and see if the remainder is 2.
A. 7 divided by 5 gives a remainder of 2. B. 12 divided by 5 gives a remainder of 2. C. 17 divided by 5 gives a remainder of 2. D. 31 divided by 5 gives a remainder of 1.
So, options A, Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem.
Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem.
Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem.
Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem.
Knowee AI is a powerful AI-powered study tool designed to help you to solve study problem.
Knowee AI
Upgrade your grade with Knowee
Get personalized homework help. Review tough concepts in more detail, or go deeper into your topic by exploring other relevant questions.
|
{"url":"https://knowee.ai/questions/3159860--which-of-the-following-numbers-give-a-remainder-of-when-divided-by","timestamp":"2024-11-06T10:42:45Z","content_type":"text/html","content_length":"364120","record_id":"<urn:uuid:0b71fd4d-6874-49d5-a1b4-e5d9f4a9cd76>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00439.warc.gz"}
|
Weekday Escape N°99 - Walkthrough Guides, Reviews, Discussion, Hints and Tips at Jay is games
then hold them, because to help distract from the inexorable march of time, this week's Weekday Escape serves up some of your favourite creators in a big way! TomaTea brings you something sweet to
eat, but you'll need your most obnoxious fake French laugh to get it. Neutral has the place all done up for Halloween for you, though the language may be a barrier for some. And Ichima offers you a
cozy place to get snuggled up and wind down with one fantastic view.
Candy Shop Escape - "But Dora," I can hear you saying, "why would you possibly want to escape from a candy store? Much less one designed by TomaTea?" Well, to put it bluntly, if it were a Pizza and
Garlic Bread store, I would feel quite differently, since I've never been much for sweets, even those hiding cunning puzzles and clues. As it is, I'm going to need your help to get out of this swanky
little boulangerie, so let's get clicking!
Wicked Room - If you can't read Japanese, this mini room escape by none other than Neutral might give you a hard time, at least when it comes to reading the paper clues you'll find. I suggest we
give it the old college try, clean up that broken glass, and then throw something spooky, old, and grainy on the television. If you're *filled with d e t e r m i n a t i o n, however, well, I guess
those ghosts and ghouls better beware.
Room 10: Heart - So maybe you want a room escape game that's a little more cozy? That's cool, I gotchu. Or, well, Ichima does, so if you want to be locked up somewhere that seems expressly designed
for snuggling with a view, look no further. With some gorgeous windows and a big comfy bed, this is one set of puzzles you can solve in comfort. So leave your shoes and your cares at the door, and
some space for me on that bed, because I am all about a nap, like, you don't even know.
We love escape games, and our readers love talking about them and sharing hints! How about you? Let us know what you think, ask for clues, or help out other players in the comments below.
Walkthrough Guide
(Please allow page to fully load for spoiler tags to be functional.)
There are only two parts where you need to read the Japanese in Wicked Room:
You don't actually need to read until you have picked up the fourth set of notes. Note however that most puzzles cannot be brute-forced until you have seen the required information to solve them.
Reading through the fourth set of notes will unlock the hotspot on the closet. This will make a different sound when you click near it.
You should have five sets of notes for the final puzzle. Taking symbols from the eleven notes will give you the clue in Japanese. You could brute-force the required combination (after viewing all the
required information), but here it is:
Yellow eye, red hand and skull, green cat.
How to figure out the math puzzle in Wicked Escape
First, note that the white shapes are different numbers than the black shapes, and each shape is a different number. Also, remember that zero is a perfectly good number, and that when you're adding
two base-10 numbers, the most you're ever going to carry is a 1.
We know f is 3.
Starting from the end, cc can only be 11, since even the next possibility (22) would require us to carry a 4, which ain't happening. (Two times 9, the largest digit, is 18, plus 4 is 22.) This means
that a must be 5, and we need to carry a 1 from the previous column.
There isn't an integer solution to 2b = 3, so we need to carry a 1 from c+d = g. This means that b is either 1 or 6, but a 1 wouldn't let us carry anything to the next column. So b = 6.
This gets us to:
The remaining digits are 0, 2, 4, 7, 8, 9.
The unique digits constraint means that e can't be 0 (because then h would need to be 6, which is already taken), 7 (because 3 is taken), or 9 (because 5 is taken). If we try 8 in that spot, that
means h is 4 and we carry a 1, but then d must be 9 (because we need to carry a 1), but that makes g a 1, which is already taken. If we try 4 as e, then d can be 8 (since we already know 9 would be a
problem), but that makes both g and h into zeroes. So e must be 2, which makes h an 8.
This means that d must be 9, since that's the only number that would let us carry the one, and that makes g into a 0.
So the equation is
and the shapes are: star = 8, circle = 2, square = 9, diamond = 0.
Candy Shop Escape Walkthrough/Solutions Guide
The Puzzles
Drawer 1 (up-down buttons)
Clue: cake slices
on the shelves, there are 6 slices of cake, 3 on the left and 3 on the right. Note which way the slices point.
up, down, up, up, down, up
Get: donut
Drawer 2 (4 colors)
Clue: cake squares with berries
Each table has a square of cake. Note the color of the icing, as well as the number of berries on top.
red: 3, blue: 2, yellow: 4, green: 1.
1 click, 4 clicks, 2 clicks, 3 clicks
Get: puzzle piece
Drawer 3 (4 numbers)
(This'll be the last puzzle you solve) You will need:
• Green lollipop
Solve the cupcake picture
• Pink donut
1st drawer
• Candy cane #1
in one of the cupcakes above the brown gelatin dessert
• Candy cane #2
4th drawer
• Candy cane #3
5th drawer
• Candy cane #4
4x4 picture
• Broken fork
Get the (whole) fork from the red cake, then use it to pry open the green canister, breaking it in the process
Put everything in place, then do some math.
The lollipop and the donut should be obvious. Use the broken fork to open the cabinet under the two displays, then put the candy canes in the indentations. Note the shape made by the canes.
It's supposed to be a multiplication sign. Yes, I know.
Look at the faces of the two displays and note the numbers in the highlighted wedges.
Lollipops = 47, donuts = 63
Do the math.
Get: the key!
Drawer 4 (4 letters)
Clue: canisters plus letter hint (see 5th drawer)
The canisters have numbers on the underside of their lids. Use the fork (from the red cake square) to pry open the green canister.
The drawer hint says O=15; that's a letter O, not a numeral zero.
It's a standard "number the letters of the alphabet" cypher. Brown = 3 = C, yellow = 1 = A, green = 6 = F, pink = 5 = E.
Get: candy cane
Drawer 5 (4 shapes)
Clue: 3 pink cupcakes on bottom right shelf
Count each shape. Hint: there's more than one circle.
1 square, 2 hearts, 3 stars, 4 circles
Get: candy cane, O=15 clue
Drawer 6 (4x4 buttons)
Clue: the incomplete puzzle
Note which pieces are missing from the puzzle. Click those buttons.
Bottom left to top right diagonal, plus bottom right corner
Get: puzzle piece
Cupcake picture
Clue: gumball machine and the cupcake on the shelf that greatly resembles the picture (on the right above the pink gelatin dessert).
That thing you get from the gumball machine is a maraschino cherry. Yes, really. (Solve the triangles picture to get the coin.)
Put the cherry on the cupcake, then note the colors (green stem, red cherry, brown frosting, pink cupcake, yellow paper). Set the cupcake picture accordingly.
stem = 3 clicks, cherry = 4 clicks, frosting = 5 clicks, cupcake = 1 click, paper cup = 2 clicks.
Get: green lollipop
4x4 picture
Clue: under yellow cake triangle; also need to solve the puzzle picture.
Put the teapot in the top left dot, then move it along the path indicated by the clue by clicking each dot in turn.
Get: candy cane
Triangles picture
Clue: the cake squares on the tables and the yellow cake triangle on the counter.
The yellow triangle is for orientation; you need to set the corner squares to the appropriate colors. The middle "+" shape remains black, except for that yellow wedge of course.
Top left = green (2 clicks on each wedge), top right = red (1 click), bottom left = yellow (3 clicks), bottom right = blue (4 clicks).
Get: coin
Puzzle picture
• 2nd drawer
• Behind pink gelatin dessert
• 6th drawer
• On table with green cake
• On the counter next to yellow cake triangle
The cup of coffee (or is that tea?) goes in the middle of the top row. The strawberries are mostly on the leftmost square of the 2nd row from the top. You should be able to figure it out from there.
Get: strange spiky teapot
Order of Operations (aka Walkthrough Schematic)
1. Drawer 1
2. Drawer 2
3. Drawer 5
4. Drawer 4
5. Drawer 6
6. Puzzle picture
7. 4x4 picture
8. Triangle picture
9. Cupcake picture
10. Drawer 3
Wicked Room Walkthrough
For this, you need the clue page set #2, which is along the right wall, to the right of the pumpkin picture/decal.
The sequence for pressing the buttons is right there on the card.
left, right, top, bottom, top.
Get: a pumpkin. Well, except you can't actually take it out. And if you close the box, you need to solve it again if you want to open it. So make sure you note everything that's notable about the
Pumpkin Math
Clue page #1 is on the couch. Underneath it is a formula for getting a key.
Note the directions the triangles are pointing.
The pumpkin in the middle of the floor has upward-pointing triangles for the eyes, and it has an 11 written on its bottom. The pumpkin on the right wall has its eyes pointing at each other, and if
you peel up the bottom, it says 31. And the pumpkin in the hatbox had downward-pointing eyes and had a 10 on its bottom.
Plug in the numbers and get a 3-digit number. (Remember your order of operations: multiply first, then add.)
Now, where could you use a three-digit number?
The only place is that Mondrianic picture on the wall.
Get: clue papers #3, #4, and #5, plus some shape math.
Shape math
In the clue packet you get from the picture, there's a card with shapes on it. Solve for the white shapes.
In addition to the black star = 3 hint that's right on the card, you also need to know that the white shapes are different numbers than the black shapes, and each shape is a different digit. (Oh, and
0 is a perfectly valid digit.)
I've already posted a detailed explanation for solving the equation, so I won't repeat it here. The solution is
and the shapes are: star = 8, circle = 2, square = 9, diamond = 0.
Get: code for the restroom door. (Make sure you write it down or remember it, because the bathroom door locks again if you leave.)
There's some writing on the right wall of the restroom. Most of it is useless to us clueless westerners, but that "SOaP-ElSE" on the top looks... intriguing.
Look in the mirror.
That's not SOaP-ElSE, that's 3213-9602.
Dial the number (the phone is in the alcove right above the writing) and listen. Why is it ringing twice?
It's not one phone ringing twice, it's two different phones. Go back in the room and look under the couch for your wayward cell phone (admit it: you use this method to find your phone all. the.
time), and some clue papers.
Get: clues 6-9, and a very strange crossword. Or word search. Or something.
After reentering the bathroom code (Neutral, did you get so exhausted by the out-of-this-world Elements that you forgot to pay attention to these little details?), look at the blue box, the very
strange crossword, and the maze on the wall under the broken green cabinet.
Read the "crossword" as given by the maze, then do as instructed.
(Key)=GEA PUSH"OPEN"TEN TIMES
Get: extension cord
Halloween Night
If you've been observant, you know exactly where the extension cord is needed.
Plug in the "Halloween Night" sign, silly.
Wonder what this button does?
Press the button repeatedly, and notice what the sign does each time.
First, most of the letters turn red, but "o", "w", and "t" remain dark. Next, the letters turn blue, except for "e", "n", "N", and "i". Then we get green with "o", "e", and "n" staying off, and
lastly, it's white, with "e" and "ight" dark.
Get: a color + number clue.
Use the clue from the Halloween Night sign to open the padlock and get the corkscrew.
The dials are in the same order as the clues.
Now, why on earth do we need a corkscrew? The only wine bottle in the room is in teeny-tiny pieces.
Examine the corkscrew to get a key.
First, the obvious one: use the corkscrew/key to open the bottom half of the broken green cabinet. Get a flashlight. (No batteries, of course.)
Next, a bit of language barrier to overcome: one of the clue papers says that the brown cabinet door isn't locked, it's just stuck. (Or something to that effect.)
Try knocking on the edges and corners of the cabinet door; one spot sounds different. Knock there repeatedly, then try the handle.
If you don't have sound, it's the bottom left corner.
Get: clue papers #10 and #11, and a card that says TEXT.
Hmm, TEXT and EXIT have an awful lot of letters in common. And that door is the same color as the the last T in TEXT.
Click the E, X, and T of EXIT, then click the grid below to form a blue T shape.
Freedom! Or maybe not. That's one dark hallway, sure wish the flashlight had batteries.
Look at the lamp, or rather, next to the lamp.
Get the battery and put it into the flashlight. Don't worry about the switch not staying on; you can still use the flashlight.
Use the flashlight to examine the hallway.
The switch on the right doesn't seem to do anything, and won't stay down. There's a pit in the middle of the floor. On the left, there's a calendar with some days circled, and a panel of some sort
with a big red button. Go ahead and press the big red button.
This is language barrier #2, and way more serious than the first: you need to combine the circled days on the calendar with the corresponding clue papers to get certain words/syllables, which you
then combine into colors for the eye, hand, skull, and cat. Yeah, just go ahead and read the solution.
Yellow eye, red hand and skull, green cat.
Colorblind: that's 4 clicks on the eye, 1 click each on the hand and skull, and 6 clicks on the cat.
Once you've set the picture colors correctly, press down the switch on the right, then wait. Eventually, a bit of checkerboard floor fills the pit, and there's all sorts of goodies on it.
Examine the goodies.
The key is right there in the middle. And you don't even have to use it, it's enough to just touch it. (Must be a portkey.)
Room 10: Heart Walkthrough
As per my usual, I'm assuming you've already taken a look around the room, and have at least a vague idea of where things are, so there will be no "turn right twice and click the doohickey" types of
instructions. There will be colorblind instructions for the one puzzle that needs them.
Nightstand box
The clue is one of the cowbells around the room; just make sure it matches exactly.
Only one of the cowbells has a small button on top and a large button on the bottom.
It's the one on the exit door (the one with no window): blue red yellow yellow red blue (that's left to right, top to bottom).
Colorblind: 1 click = red, 2 clicks = yellow, 3 clicks = blue, then 4 clicks takes you back to red. If you zoom out, it'll reset to all white.
Get: a diamond-shaped chocolate
Top desk drawer (3 buttons)
First, find a knob.
Check the alphorn.
Next, find a place to use said knob. (It might help if you turn the knob around and look at its post.)
Maybe that other drawer with three buttons? With that flat slot underneath?
Look at the table to the right of the curtains. Put the knob into the slot. Turn the knob to each position and open the drawer.
Left: coat hook. Middle: picture of the Matterhorn. Right: tea kettle.
Now it's time to find some, um, "numbers".
Look for each of the items in the room, and note the symbols on it.
For example, underneath the teapot's lid (on the desk) there are three symbols. The first looks like a Roman numeral 3, then there's something that looks like an 8 with an extra crossbar, then
there's a backwards 4. (I'm not even going to attempt to describe the symbols on the coat hook.)
Where else have we seen symbols like this?
Check the book (on the desk). The pages are "numbered" with exactly the same symbols.
Translate the symbols on the items into numbers, then press the corresponding buttons.
Left, middle, right, right, left, left, right, middle.
Get: a back-scratcher
Bottom desk drawer (4 digits)
The book tells you how to get this number.
The second page gives the height of the Matterhorn as 4478 meters, and the third and fourth pages say to subtract from this height a number that has to do with a house with trees next to it.
Hmm... maybe that's not a house, maybe that's a cuckoo clock.
Use the back-scratcher to pull the chain of the clock repeatedly and note the numbers. (You'll need to extend the back-scratcher first.)
Do the math to get
Get: a key
Arrow box in closet
(The closet is the door with a window. The key is in the bottom desk drawer.) You need two sets of arrow clues.
One set ("B") is right there on the skis. The other is in the main room, but it's initially hidden.
Open the curtains to see the "A" arrows.
Put the two clues together as indicated.
Down, left, right, left, down, right, up, down. (Note that those are the directions the arrows are pointing, but if you look at the flower on the box, you're actually pressing opposite to those
directions. E.g. the arrow pointing left is the right side of the flower.)
Get: a round chocolate
Middle desk drawer
The key is in the right bedpost. Well, gee, that got us far. Not.
Box in middle desk drawer
There are five pictures in green frames in the room and the closet. Put them together.
The picture in the closet says "CHEESES". The four pictures in the room all have words that happen to start with those letters (egg, cowbell, swiss, honey).
TR, BR, TL, TL, BL, TL, BL
Get: a square chocolate
Round box connected to blue picture
On the table to the right of the curtains, there's a round pink box. In the box, there are three indentations: square, heart-shaped, and circular.
You have things to put in the diamond-shaped and circular spaces: the chocolates from the nightstand and arrow box. You have a third chocolate, but it's the wrong shape. If only there were a way to
melt it down and reshape it.
In the closet, when you're looking at the skis, there's a darker spot on the floor. Use the ski pole to smash it open.
This gets you a heart-shaped mold. Now for a source of heat...
Look at the radiator in the closet. Is it on?
Turn the knob so the red lines match up.
Put the square chocolate into the dish that's conveniently sitting on the radiator. Wait for it to melt (which in game world means zoom out and come back). Collect the dish.
Look at the heart-shaped mold, then pour in the melted chocolate.
Put the chocolates into the round pink box, then get the key from behind the picture.
24 Comments
I'm afraid that not being able to read Japanese makes the last puzzle of Wicked Room actually impossible.
Stuck in Wicked Room. I have:
A torch with no battery
A whole bunch of notes in Japanese.
Have I reached the point where I can't go further without knowing what the notes say?
POP! (Well, power of desperate clicking!)
I've now opened the EXIT door and acquired a battery. Looked through the exit hall, but no idea what to do next. I guess this is the last puzzle?
Dora, I am 100% behind you if you want to Kickstart that Pizza and Garlic Bread Store idea. :-)
• October 28, 2015 1:34 PM replied to Daibhid C
Yes, and you cannot solve it if you can't read Japanese.
You have to read the words on each note that correspond to the circled days on the calendar, and that tells you which picture is which color. I had to look it up myself. The eye is yellow, the hand
and skull are red, and the cat is green.
• October 28, 2015 1:37 PM replied to Night Stryke
Thank you!
It was a great game right up until that point.
There are only two parts where you need to read the Japanese in Wicked Room:
You don't actually need to read until you have picked up the fourth set of notes. Note however that most puzzles cannot be brute-forced until you have seen the required information to solve them.
Reading through the fourth set of notes will unlock the hotspot on the closet. This will make a different sound when you click near it.
You should have five sets of notes for the final puzzle. Taking symbols from the eleven notes will give you the clue in Japanese. You could brute-force the required combination (after viewing all the
required information), but here it is:
Yellow eye, red hand and skull, green cat.
Question about Wicked Room.
I watched a video explaining how to open up the brown cabinet that is to the left of the couch and Halloween Night sign.
I tried what the video did and was able to open the cabinet.
But I do not understand how one could figure it out from the clues in the game.
I just clicked on the bottom left a few times and then on the bottom of the cabinet and it popped open. Made no sense to me at all.
I will say that the message clue with the various shapes - circle, square, star, diamond - was quite interesting and challenging. I was stuck for a while until I realized that...
the filled in shapes represented different numbers than the white (or unfilled) shapes. And that each number had to be different. If you don't apply those rules, you cannot figure out the numbers.
• October 28, 2015 4:06 PM replied to WillYum
I assume the explanation of why the cabinet opens the way it does is contained in the text of one or more of the cards we get, but since it's in Japanese, we (well, most of us) can't read it.
• October 28, 2015 5:21 PM replied to Reka
I think I can offer an explanation for the cabinet, as I remember a similar scenario in one of Neutral's older games.
The cabinet door was never locked, it was just stuck. Striking the door in that spot got it unstuck.
Candy Shop
Supposedly I have the clue to solve the drawer with the 4letter word, but I cant for the life o f me figure it out. And yes I did try cake.
How to figure out the math puzzle in Wicked Escape
First, note that the white shapes are different numbers than the black shapes, and each shape is a different number. Also, remember that zero is a perfectly good number, and that when you're adding
two base-10 numbers, the most you're ever going to carry is a 1.
We know f is 3.
Starting from the end, cc can only be 11, since even the next possibility (22) would require us to carry a 4, which ain't happening. (Two times 9, the largest digit, is 18, plus 4 is 22.) This means
that a must be 5, and we need to carry a 1 from the previous column.
There isn't an integer solution to 2b = 3, so we need to carry a 1 from c+d = g. This means that b is either 1 or 6, but a 1 wouldn't let us carry anything to the next column. So b = 6.
This gets us to:
The remaining digits are 0, 2, 4, 7, 8, 9.
The unique digits constraint means that e can't be 0 (because then h would need to be 6, which is already taken), 7 (because 3 is taken), or 9 (because 5 is taken). If we try 8 in that spot, that
means h is 4 and we carry a 1, but then d must be 9 (because we need to carry a 1), but that makes g a 1, which is already taken. If we try 4 as e, then d can be 8 (since we already know 9 would be a
problem), but that makes both g and h into zeroes. So e must be 2, which makes h an 8.
This means that d must be 9, since that's the only number that would let us carry the one, and that makes g into a 0.
So the equation is
and the shapes are: star = 8, circle = 2, square = 9, diamond = 0.
the middle drawer says O=15, the four covered jars have numbers that need translating to letters
Am I the only one who realizes that next week (Wednesday, November 4) will be the 100th Weekday Escape? Congratulations!
Thanks for all the hints everyone. You're making Wicked Room much easier to get through.
I LOVE TomaTea so much, I really do.
However, I do have one issue with this latest game.
TomaTea gives us a
inside this beautiful and rather appetising shop, but
doesn't let us use it to eat CAKE?!
Congratulations, Dora and JIG, on the upcoming 100th anniversary! I have been visiting here for many years and wanted to thank you for all the fun escaping!
• October 29, 2015 12:49 AM replied to lorac
Of course - zero, circle, letter O it always defeats me in my thought process.
Considering what happens to that particular item it's probably for the best. :(
In Candy Shop, I can't figure out the 4x4 grid, although I assume the clue is
under the triangular cake.
baileydonk, the 4x4 grid in Candy Shop:
It's hanging on the wall somewhere.
Check the unfinished puzzle to the left of the counter.
Candy Shop Escape Walkthrough/Solutions Guide
The Puzzles
Drawer 1 (up-down buttons)
Clue: cake slices
on the shelves, there are 6 slices of cake, 3 on the left and 3 on the right. Note which way the slices point.
up, down, up, up, down, up
Get: donut
Drawer 2 (4 colors)
Clue: cake squares with berries
Each table has a square of cake. Note the color of the icing, as well as the number of berries on top.
red: 3, blue: 2, yellow: 4, green: 1.
1 click, 4 clicks, 2 clicks, 3 clicks
Get: puzzle piece
Drawer 3 (4 numbers)
(This'll be the last puzzle you solve) You will need:
• Green lollipop
Solve the cupcake picture
• Pink donut
1st drawer
• Candy cane #1
in one of the cupcakes above the brown gelatin dessert
• Candy cane #2
4th drawer
• Candy cane #3
5th drawer
• Candy cane #4
4x4 picture
• Broken fork
Get the (whole) fork from the red cake, then use it to pry open the green canister, breaking it in the process
Put everything in place, then do some math.
The lollipop and the donut should be obvious. Use the broken fork to open the cabinet under the two displays, then put the candy canes in the indentations. Note the shape made by the canes.
It's supposed to be a multiplication sign. Yes, I know.
Look at the faces of the two displays and note the numbers in the highlighted wedges.
Lollipops = 47, donuts = 63
Do the math.
Get: the key!
Drawer 4 (4 letters)
Clue: canisters plus letter hint (see 5th drawer)
The canisters have numbers on the underside of their lids. Use the fork (from the red cake square) to pry open the green canister.
The drawer hint says O=15; that's a letter O, not a numeral zero.
It's a standard "number the letters of the alphabet" cypher. Brown = 3 = C, yellow = 1 = A, green = 6 = F, pink = 5 = E.
Get: candy cane
Drawer 5 (4 shapes)
Clue: 3 pink cupcakes on bottom right shelf
Count each shape. Hint: there's more than one circle.
1 square, 2 hearts, 3 stars, 4 circles
Get: candy cane, O=15 clue
Drawer 6 (4x4 buttons)
Clue: the incomplete puzzle
Note which pieces are missing from the puzzle. Click those buttons.
Bottom left to top right diagonal, plus bottom right corner
Get: puzzle piece
Cupcake picture
Clue: gumball machine and the cupcake on the shelf that greatly resembles the picture (on the right above the pink gelatin dessert).
That thing you get from the gumball machine is a maraschino cherry. Yes, really. (Solve the triangles picture to get the coin.)
Put the cherry on the cupcake, then note the colors (green stem, red cherry, brown frosting, pink cupcake, yellow paper). Set the cupcake picture accordingly.
stem = 3 clicks, cherry = 4 clicks, frosting = 5 clicks, cupcake = 1 click, paper cup = 2 clicks.
Get: green lollipop
4x4 picture
Clue: under yellow cake triangle; also need to solve the puzzle picture.
Put the teapot in the top left dot, then move it along the path indicated by the clue by clicking each dot in turn.
Get: candy cane
Triangles picture
Clue: the cake squares on the tables and the yellow cake triangle on the counter.
The yellow triangle is for orientation; you need to set the corner squares to the appropriate colors. The middle "+" shape remains black, except for that yellow wedge of course.
Top left = green (2 clicks on each wedge), top right = red (1 click), bottom left = yellow (3 clicks), bottom right = blue (4 clicks).
Get: coin
Puzzle picture
• 2nd drawer
• Behind pink gelatin dessert
• 6th drawer
• On table with green cake
• On the counter next to yellow cake triangle
The cup of coffee (or is that tea?) goes in the middle of the top row. The strawberries are mostly on the leftmost square of the 2nd row from the top. You should be able to figure it out from there.
Get: strange spiky teapot
Order of Operations (aka Walkthrough Schematic)
1. Drawer 1
2. Drawer 2
3. Drawer 5
4. Drawer 4
5. Drawer 6
6. Puzzle picture
7. 4x4 picture
8. Triangle picture
9. Cupcake picture
10. Drawer 3
Wicked Room Walkthrough
For this, you need the clue page set #2, which is along the right wall, to the right of the pumpkin picture/decal.
The sequence for pressing the buttons is right there on the card.
left, right, top, bottom, top.
Get: a pumpkin. Well, except you can't actually take it out. And if you close the box, you need to solve it again if you want to open it. So make sure you note everything that's notable about the
Pumpkin Math
Clue page #1 is on the couch. Underneath it is a formula for getting a key.
Note the directions the triangles are pointing.
The pumpkin in the middle of the floor has upward-pointing triangles for the eyes, and it has an 11 written on its bottom. The pumpkin on the right wall has its eyes pointing at each other, and if
you peel up the bottom, it says 31. And the pumpkin in the hatbox had downward-pointing eyes and had a 10 on its bottom.
Plug in the numbers and get a 3-digit number. (Remember your order of operations: multiply first, then add.)
Now, where could you use a three-digit number?
The only place is that Mondrianic picture on the wall.
Get: clue papers #3, #4, and #5, plus some shape math.
Shape math
In the clue packet you get from the picture, there's a card with shapes on it. Solve for the white shapes.
In addition to the black star = 3 hint that's right on the card, you also need to know that the white shapes are different numbers than the black shapes, and each shape is a different digit. (Oh, and
0 is a perfectly valid digit.)
I've already posted a detailed explanation for solving the equation, so I won't repeat it here. The solution is
and the shapes are: star = 8, circle = 2, square = 9, diamond = 0.
Get: code for the restroom door. (Make sure you write it down or remember it, because the bathroom door locks again if you leave.)
There's some writing on the right wall of the restroom. Most of it is useless to us clueless westerners, but that "SOaP-ElSE" on the top looks... intriguing.
Look in the mirror.
That's not SOaP-ElSE, that's 3213-9602.
Dial the number (the phone is in the alcove right above the writing) and listen. Why is it ringing twice?
It's not one phone ringing twice, it's two different phones. Go back in the room and look under the couch for your wayward cell phone (admit it: you use this method to find your phone all. the.
time), and some clue papers.
Get: clues 6-9, and a very strange crossword. Or word search. Or something.
After reentering the bathroom code (Neutral, did you get so exhausted by the out-of-this-world Elements that you forgot to pay attention to these little details?), look at the blue box, the very
strange crossword, and the maze on the wall under the broken green cabinet.
Read the "crossword" as given by the maze, then do as instructed.
(Key)=GEA PUSH"OPEN"TEN TIMES
Get: extension cord
Halloween Night
If you've been observant, you know exactly where the extension cord is needed.
Plug in the "Halloween Night" sign, silly.
Wonder what this button does?
Press the button repeatedly, and notice what the sign does each time.
First, most of the letters turn red, but "o", "w", and "t" remain dark. Next, the letters turn blue, except for "e", "n", "N", and "i". Then we get green with "o", "e", and "n" staying off, and
lastly, it's white, with "e" and "ight" dark.
Get: a color + number clue.
Use the clue from the Halloween Night sign to open the padlock and get the corkscrew.
The dials are in the same order as the clues.
Now, why on earth do we need a corkscrew? The only wine bottle in the room is in teeny-tiny pieces.
Examine the corkscrew to get a key.
First, the obvious one: use the corkscrew/key to open the bottom half of the broken green cabinet. Get a flashlight. (No batteries, of course.)
Next, a bit of language barrier to overcome: one of the clue papers says that the brown cabinet door isn't locked, it's just stuck. (Or something to that effect.)
Try knocking on the edges and corners of the cabinet door; one spot sounds different. Knock there repeatedly, then try the handle.
If you don't have sound, it's the bottom left corner.
Get: clue papers #10 and #11, and a card that says TEXT.
Hmm, TEXT and EXIT have an awful lot of letters in common. And that door is the same color as the the last T in TEXT.
Click the E, X, and T of EXIT, then click the grid below to form a blue T shape.
Freedom! Or maybe not. That's one dark hallway, sure wish the flashlight had batteries.
Look at the lamp, or rather, next to the lamp.
Get the battery and put it into the flashlight. Don't worry about the switch not staying on; you can still use the flashlight.
Use the flashlight to examine the hallway.
The switch on the right doesn't seem to do anything, and won't stay down. There's a pit in the middle of the floor. On the left, there's a calendar with some days circled, and a panel of some sort
with a big red button. Go ahead and press the big red button.
This is language barrier #2, and way more serious than the first: you need to combine the circled days on the calendar with the corresponding clue papers to get certain words/syllables, which you
then combine into colors for the eye, hand, skull, and cat. Yeah, just go ahead and read the solution.
Yellow eye, red hand and skull, green cat.
Colorblind: that's 4 clicks on the eye, 1 click each on the hand and skull, and 6 clicks on the cat.
Once you've set the picture colors correctly, press down the switch on the right, then wait. Eventually, a bit of checkerboard floor fills the pit, and there's all sorts of goodies on it.
Examine the goodies.
The key is right there in the middle. And you don't even have to use it, it's enough to just touch it. (Must be a portkey.)
Room 10: Heart Walkthrough
As per my usual, I'm assuming you've already taken a look around the room, and have at least a vague idea of where things are, so there will be no "turn right twice and click the doohickey" types of
instructions. There will be colorblind instructions for the one puzzle that needs them.
Nightstand box
The clue is one of the cowbells around the room; just make sure it matches exactly.
Only one of the cowbells has a small button on top and a large button on the bottom.
It's the one on the exit door (the one with no window): blue red yellow yellow red blue (that's left to right, top to bottom).
Colorblind: 1 click = red, 2 clicks = yellow, 3 clicks = blue, then 4 clicks takes you back to red. If you zoom out, it'll reset to all white.
Get: a diamond-shaped chocolate
Top desk drawer (3 buttons)
First, find a knob.
Check the alphorn.
Next, find a place to use said knob. (It might help if you turn the knob around and look at its post.)
Maybe that other drawer with three buttons? With that flat slot underneath?
Look at the table to the right of the curtains. Put the knob into the slot. Turn the knob to each position and open the drawer.
Left: coat hook. Middle: picture of the Matterhorn. Right: tea kettle.
Now it's time to find some, um, "numbers".
Look for each of the items in the room, and note the symbols on it.
For example, underneath the teapot's lid (on the desk) there are three symbols. The first looks like a Roman numeral 3, then there's something that looks like an 8 with an extra crossbar, then
there's a backwards 4. (I'm not even going to attempt to describe the symbols on the coat hook.)
Where else have we seen symbols like this?
Check the book (on the desk). The pages are "numbered" with exactly the same symbols.
Translate the symbols on the items into numbers, then press the corresponding buttons.
Left, middle, right, right, left, left, right, middle.
Get: a back-scratcher
Bottom desk drawer (4 digits)
The book tells you how to get this number.
The second page gives the height of the Matterhorn as 4478 meters, and the third and fourth pages say to subtract from this height a number that has to do with a house with trees next to it.
Hmm... maybe that's not a house, maybe that's a cuckoo clock.
Use the back-scratcher to pull the chain of the clock repeatedly and note the numbers. (You'll need to extend the back-scratcher first.)
Do the math to get
Get: a key
Arrow box in closet
(The closet is the door with a window. The key is in the bottom desk drawer.) You need two sets of arrow clues.
One set ("B") is right there on the skis. The other is in the main room, but it's initially hidden.
Open the curtains to see the "A" arrows.
Put the two clues together as indicated.
Down, left, right, left, down, right, up, down. (Note that those are the directions the arrows are pointing, but if you look at the flower on the box, you're actually pressing opposite to those
directions. E.g. the arrow pointing left is the right side of the flower.)
Get: a round chocolate
Middle desk drawer
The key is in the right bedpost. Well, gee, that got us far. Not.
Box in middle desk drawer
There are five pictures in green frames in the room and the closet. Put them together.
The picture in the closet says "CHEESES". The four pictures in the room all have words that happen to start with those letters (egg, cowbell, swiss, honey).
TR, BR, TL, TL, BL, TL, BL
Get: a square chocolate
Round box connected to blue picture
On the table to the right of the curtains, there's a round pink box. In the box, there are three indentations: square, heart-shaped, and circular.
You have things to put in the diamond-shaped and circular spaces: the chocolates from the nightstand and arrow box. You have a third chocolate, but it's the wrong shape. If only there were a way to
melt it down and reshape it.
In the closet, when you're looking at the skis, there's a darker spot on the floor. Use the ski pole to smash it open.
This gets you a heart-shaped mold. Now for a source of heat...
Look at the radiator in the closet. Is it on?
Turn the knob so the red lines match up.
Put the square chocolate into the dish that's conveniently sitting on the radiator. Wait for it to melt (which in game world means zoom out and come back). Collect the dish.
Look at the heart-shaped mold, then pour in the melted chocolate.
Put the chocolates into the round pink box, then get the key from behind the picture.
In case this happens to anyone else, I encountered bugs/glitches in Wicked Room with
the hatbox
the phone in the bathroom.
For the first one,
I had to enter the correct code on the hatbox repeatedly until it arbitrarily decided to work once.
For the second,
whenever I tried to enter a number on the phone, the majority of the time it entered each number twice - i.e., if I pressed "3", the phone showed "33".
I restarted my game multiple times, but it didn't help.
If anyone else is playing on Firefox and having these problems, the only semi-solution I found was to update from the previous version to the most up-to-date version of Firefox (42.0), which was
released yesterday (3 November). After doing that and loading Wicked Room again, the bugs were still there, but their severity was lessened. For example,
it took fewer tries to get the hatbox open, and it took less than ten tries for me to be able to consecutively enter each number as a single number on the phone.
I didn't have any other problems with the game and was able to complete it successfully.
^ Scroll Up | Homepage >
Leave a comment [top of page]
Please consider creating a Casual Gameplay account if you're a regular visitor here, as it will allow us to create an even better experience for you. Sign-up here!
• You may use limited HTML tags for style:
(a href, b, br/, strong, em, ul, ol, li, code, spoiler)
HTML tags begin with a less-than sign: < and end with a greater-than sign: >. Always. No exceptions.
• To post spoilers, please use spoiler tags: <spoiler> example </spoiler>
If you need help understanding spoiler tags, read the spoiler help.
• Please Preview your comment before posting, especially when using spoilers!
• No link dropping, no domains as names; do not spam, and do not advertise! (rel="nofollow" in use)
|
{"url":"https://games.jayisgames.com/review/weekday-escape-n99.php","timestamp":"2024-11-08T14:31:37Z","content_type":"text/html","content_length":"163425","record_id":"<urn:uuid:485f9e17-4e46-4113-96f4-c89f12689936>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00030.warc.gz"}
|
Free Math Worksheets Adding Mixed Numbers
Free Math Worksheets Adding Mixed Numbers act as fundamental tools in the realm of maths, giving an organized yet functional system for learners to discover and master mathematical ideas. These
worksheets use a structured technique to comprehending numbers, nurturing a solid structure upon which mathematical proficiency grows. From the simplest checking exercises to the intricacies of
advanced estimations, Free Math Worksheets Adding Mixed Numbers satisfy students of varied ages and ability degrees.
Revealing the Essence of Free Math Worksheets Adding Mixed Numbers
Free Math Worksheets Adding Mixed Numbers
Free Math Worksheets Adding Mixed Numbers -
Adding and Subtracting with Mixed Numbers Worksheet Mixed Fraction Addition with Like Denominators 5 Worksheet 1 Browse Printable Adding Mixed Number Worksheets Award winning educational materials
designed to
Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators The arithmetic in these questions is kept simple and students can try to formulate
the answers mentally without writing down calculations These worksheets are pdf files
At their core, Free Math Worksheets Adding Mixed Numbers are automobiles for theoretical understanding. They encapsulate a myriad of mathematical concepts, directing learners through the labyrinth of
numbers with a collection of interesting and deliberate exercises. These worksheets go beyond the boundaries of conventional rote learning, motivating active involvement and promoting an instinctive
grasp of numerical connections.
Supporting Number Sense and Reasoning
Adding Mixed Fractions Worksheets
Adding Mixed Fractions Worksheets
Adding mixed numbers when the denominators of fractions are the same is always simple Step 1 Add the integers together Step 2 Add the numerator of the fractions together Step 3 If the sum of
fractions becomes an improper fraction convert it to a mixed number and write the answer
Our Adding and Subtracting Fractions and Mixed Numbers worksheets are designed to supplement our Adding and Subtracting Fractions and Mixed Numbers lessons These ready to use printable worksheets
help assess student learning Be sure to check out the fun interactive fraction activities and additional worksheets below
The heart of Free Math Worksheets Adding Mixed Numbers depends on cultivating number sense-- a deep understanding of numbers' meanings and affiliations. They urge exploration, inviting learners to
study arithmetic procedures, analyze patterns, and unlock the secrets of series. With thought-provoking obstacles and rational problems, these worksheets end up being gateways to refining thinking
abilities, nurturing the logical minds of budding mathematicians.
From Theory to Real-World Application
Adding Mixed Numbers Worksheet
Adding Mixed Numbers Worksheet
Free Printable Adding Mixed Numbers worksheets Adding Mixed Numbers worksheets offer a valuable resource for math teachers and students to discover and practice essential skills in adding mixed
numbers enhancing mathematical understanding and proficiency
Welcome to The Adding and Subtracting Two Mixed Fractions with Similar Denominators Mixed Fractions Results and Some Simplifying Fillable A Math Worksheet from the Fractions Worksheets Page at Math
Drills This math worksheet was created or last revised on 2023 09 15 and has been viewed 2 435 times
Free Math Worksheets Adding Mixed Numbers function as channels linking academic abstractions with the apparent facts of day-to-day life. By infusing sensible situations right into mathematical
exercises, students witness the importance of numbers in their surroundings. From budgeting and measurement conversions to understanding statistical information, these worksheets equip pupils to
possess their mathematical prowess past the boundaries of the class.
Varied Tools and Techniques
Versatility is inherent in Free Math Worksheets Adding Mixed Numbers, using a collection of pedagogical devices to deal with varied discovering designs. Aesthetic aids such as number lines,
manipulatives, and digital resources work as companions in imagining abstract concepts. This diverse method ensures inclusivity, fitting learners with various preferences, staminas, and cognitive
Inclusivity and Cultural Relevance
In a significantly diverse world, Free Math Worksheets Adding Mixed Numbers embrace inclusivity. They transcend cultural boundaries, incorporating instances and issues that resonate with learners
from diverse backgrounds. By including culturally pertinent contexts, these worksheets promote an environment where every learner really feels represented and valued, boosting their link with
mathematical concepts.
Crafting a Path to Mathematical Mastery
Free Math Worksheets Adding Mixed Numbers chart a course in the direction of mathematical fluency. They infuse willpower, crucial thinking, and problem-solving abilities, crucial features not just in
maths but in different aspects of life. These worksheets encourage students to navigate the elaborate terrain of numbers, nurturing an extensive admiration for the beauty and reasoning inherent in
Embracing the Future of Education
In an age noted by technological improvement, Free Math Worksheets Adding Mixed Numbers seamlessly adjust to digital systems. Interactive interfaces and digital resources augment standard
understanding, supplying immersive experiences that transcend spatial and temporal boundaries. This combinations of conventional techniques with technological advancements proclaims an encouraging
period in education and learning, promoting an extra dynamic and appealing discovering environment.
Verdict: Embracing the Magic of Numbers
Free Math Worksheets Adding Mixed Numbers epitomize the magic inherent in mathematics-- an enchanting trip of expedition, exploration, and proficiency. They transcend traditional rearing, functioning
as drivers for sparking the flames of inquisitiveness and inquiry. Via Free Math Worksheets Adding Mixed Numbers, learners start an odyssey, opening the enigmatic world of numbers-- one trouble, one
remedy, at once.
32 Adding And Subtracting Mixed Numbers Worksheet Support Worksheet
Adding Mixed Numbers Activity
Check more of Free Math Worksheets Adding Mixed Numbers below
Adding Mixed Numbers Worksheet Answers
Adding Mixed Numbers Worksheet
Mixed Numbers Worksheets
Adding Fractions Worksheet Pdf
Mixed Operations Math Worksheet With 3 Terms Of Negative The Adding And Subtracting Three
Adding Mixed Numbers Worksheet
Adding Mixed Numbers like Denominators K5 Learning
Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators The arithmetic in these questions is kept simple and students can try to formulate
the answers mentally without writing down calculations These worksheets are pdf files
Mixed Number Worksheets
This is a collection of improper fraction and mixed number worksheets including worksheets on adding and subtracting mixed numbers Mixed Numbers Basic Concept Mixed Numbers FREE Students write the
mixed number shown by each illustration 3rd through 5th Grades View PDF Mixed Numbers 2 Students color the sets of shapes to
Below are six versions of our grade 4 fractions worksheet on adding mixed numbers which have the same denominators The arithmetic in these questions is kept simple and students can try to formulate
the answers mentally without writing down calculations These worksheets are pdf files
This is a collection of improper fraction and mixed number worksheets including worksheets on adding and subtracting mixed numbers Mixed Numbers Basic Concept Mixed Numbers FREE Students write the
mixed number shown by each illustration 3rd through 5th Grades View PDF Mixed Numbers 2 Students color the sets of shapes to
Adding Fractions Worksheet Pdf
Adding Mixed Numbers Worksheet
Mixed Operations Math Worksheet With 3 Terms Of Negative The Adding And Subtracting Three
Adding Mixed Numbers Worksheet
Adding Mixed Numbers With Like Denominators Worksheets Adding Mixed Number Mixed Numbers
Adding Mixed Numbers Printable Worksheets Free Printable Worksheet
Adding Mixed Numbers Printable Worksheets Free Printable Worksheet
Adding Mixed Numbers Worksheet
|
{"url":"https://szukarka.net/free-math-worksheets-adding-mixed-numbers","timestamp":"2024-11-08T11:36:02Z","content_type":"text/html","content_length":"25546","record_id":"<urn:uuid:4f69be7c-6c29-4f46-93fc-09d1cbdfe7d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00352.warc.gz"}
|
Introduction To Statistics Books ( Free )
PDF Drive is your search engine for PDF files. As of today we have 75,647,560 eBooks for you to download for free. No annoying ads, no download limits, enjoy it and don't forget to bookmark and share
the love!
Introduction To Statistics Books
“ Where there is ruin, there is hope for a treasure. ” ― Rumi
Ask yourself:
What do I allow to distract me from really living?
|
{"url":"http://www.pdfdrive.com/introduction-to-statistics-books.html","timestamp":"2024-11-07T17:12:00Z","content_type":"application/xhtml+xml","content_length":"57892","record_id":"<urn:uuid:934dd088-be94-4891-b548-65648ab2995f>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00226.warc.gz"}
|
Introductory Chemical Engineering Thermodynamics, 2nd ed.
US Edition (1st printing)
(2.08), (2.09) - The values for the Cp contants in the table are for Cp/R. The calculated values for H are correct.
(5.13),(5.14) - The solutions are switched.
(6.08) - (a) Method 1: add +R to the RHS of the first derivative. Method 2: add RV^2/(V-b) to the RHS of the first derivative. For both methods, correct the remaining algebra. (b) replace (dV/dT)_p
with T(dV/dT)_p.
(7.05) - the extra plots on the left side should be ignored.
(8.09) - P[2] is 5x too large, so (H-H^ig)[2] is also 5x too large. Add 8000 to the given ΔH.
(8.15) (c) last line, each numerical value should be negative.
(10.04) - The temperatures are out of range for the Antoine coefficients. The shortcut Psat equation may give more accurate results.
(11.06) - (a),(b) The solution is for 50C. The answers at 30C, for (a), multiply the given P by 0.436. Multiply the given y1 by 1.065; for (b) multiply the given P by 0.416. Multiply the given x1 by
(11.16) (c) The value of A12 is wrong, multiply the given value by 1.667. The remainder of the calculations use the correct value for A12.
(12.01) (d) multiply the given A12 by 1.605, multiply A21 by 0.8174. At x1 = 0.1, multiply g1 by 3.76, muliply g2 by 1.013. Multiply given P by 1.245.
(14.20) - volumes were reversed in the US 1^st-3^rd and 1^st INT printing. The values in the solution are fine. The note about which printings have the reversed values is wrong.
(15.15) - The calculations in the solution manual use small kij values and student solutions will be slightly different.
(18.03) - [F-] should not appear in the proton condition, changing the balance. Add about 3.7 to the given pH.
International Edition (1st printing)
(2.06) - The values for the Cp contants in the table are for Cp/R. The calculated values for H are correct.
(6.07) - (a) Method 1: add +R to the RHS of the first derivative. Method 2: add RV^2/(V-b) to the RHS of the first derivative. For both methods, correct the remaining algebra. (b) replace (dV/dT)_p
with T(dV/dT)_p.
(7.05) - the extra plots on the left side should be ignored.
(8.06) - P[2] is 5x too large, so (H-H^ig)[2] is also 5x too large. Add 8000 to the given ΔH.
(8.09) (c) last line, each numerical value should be negative.
(10.03) - The temperatures are out of range for the Antoine coefficients. The shortcut Psat equation may give more accurate results.
(11.09) (c) The value of A12 is wrong, multiply the given value by 1.667. The remainder of the calculations use the correct value for A12.
(11.15) - (a),(b) The solution is for 50C. The answers at 30C, for (a), multiply the given P by 0.436. Multiply the given y1 by 1.065; for (b) multiply the given P by 0.416. Multiply the given x1 by
(12.04) (d) multiply the given A12 by 1.605, multiply A21 by 0.8174. At x1 = 0.1, multiply g1 by 3.76, muliply g2 by 1.013. Multiply given P by 1.245.
(14.17) - volumes were reversed in the US 1^st-3^rd and 1^st INT printing. The values in the solution are fine. The note about which printings have the reversed values is wrong.
(15.13) - The calculations in the solution manual use small kij values and student solutions will be slightly different.
(18.02) - [F-] should not appear in the proton condition, changing the balance. Add about 3.7 to the given pH.
|
{"url":"https://chethermo.net/instructors/errata","timestamp":"2024-11-07T10:55:24Z","content_type":"text/html","content_length":"12881","record_id":"<urn:uuid:adc064af-8786-422a-aee3-4a51945de3cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00571.warc.gz"}
|
Excel 3D reference: refer to the same cell or range in multiple worksheets
This short tutorial explains what Excel 3-D reference is and how you can use it to reference the same cell or a range of cells in all selected sheets. You will also learn how to make a 3-D formula to
aggregate data in different worksheets, for example sum the same cell from multiple sheets with a single formula.
One of Excel's greatest cell reference features is a 3D reference, or dimensional reference as it is also known.
A 3D reference in Excel refers to the same cell or range of cells on multiple worksheets. It is a very convenient and fast way to calculate data across several worksheets with the same structure, and
it may be a good alternative to the Excel Consolidate feature. This may sound a bit vague, but don't worry, the following examples will make things clearer.
What is a 3D reference in Excel?
As noted above, an Excel 3D reference lets you refer to the same cell or a range of cells in several worksheets. In other words, it references not only a range of cells, but also a range of worksheet
names. The key point is that all of the referenced sheets should have the same pattern and the same data type. Please consider the following example.
Supposing you have monthly sales reports in 4 different sheets:
What you are looking for is finding out the grand total, i.e. adding up the sub-totals in four monthly sheets. The most obvious solution that comes to mind is add up the sub-total cells from all the
worksheets in the usual way:
But what if you have 12 sheets for the whole year, or even more sheets for several years? This would be quite a lot of work. Instead, you can use the SUM function with a 3D reference to sum across
This SUM formula performs the same calculations as the longer formula above, i.e. adds up the values in cell B6 in all the sheets between the two boundary worksheets that you specify, Jan and Apr in
this example:
Tip. If you intend to copy your 3-D formula to several cells and you don't want the cell references to change, you can lock them by adding the $ sign, i.e. by using absolute cells references like =
You don't even need to calculate a sub-total in each monthly sheet - include the range of cells to be calculated directly in your 3D formula:
If you want to find out the sales total for each individual product, then make a summary table in which the items appear exactly in the same order as in monthly sheets, and input the following 3-D
formula in the top-most cell, B2 in this example:
Remember to use a relative cell reference with no $ sign, so the formula gets adjusted for other cells when copied down the column:
Based on the above examples, let's make a generic Excel's 3D reference and 3D formula.
Excel 3-D reference
First_sheet:Last_sheet!cell or
Excel 3-D formula
=Function(First_sheet:Last_sheet!cell) or
When using such 3-D formulas in Excel, all worksheets between First_sheet and Last_sheet are included in calculations.
Note. Not all Excel functions support 3D references, here is the complete list of functions that do.
How to create a 3-D reference in Excel
To make a formula with a 3D reference, perform the following steps:
1. Click the cell where you want to enter your 3D formula.
2. Type the equal sign (=), enter the function's name, and type an opening parenthesis, e.g. =SUM(
3. Click the tab of the first worksheet that you want to include in a 3D reference.
4. While holding the Shift key, click the tab of the last worksheet to be included in your 3D reference.
5. Select the cell or range of cells that you want to calculate.
6. Type the rest of the formula as usual.
7. Press the Enter key to complete your Excel 3-D formula.
How to include a new sheet in an Excel 3D formula
3D references in Excel are extendable. What it means is that you can create a 3-D reference at some point, then insert a new worksheet, and move it into the range that your 3-D formula refers to. The
following example provides full details.
Supposing it's only the beginning of the year and you have data for the first few months only. However, a new sheet is likely to be added each month and you'd want to have those new sheets included
in your calculations as they are created.
For this, create an empty sheet, say Dec, and make it the last sheet in your 3D reference:
When a new sheet is inserted in a workbook, simply move it anywhere between Jan and Dec:
That's it! Because your SUM formula contains a 3-D reference, it will add up the supplied range of cells (B2:B5) in all the worksheets within the specified range of worksheet names (Jan:Dec!). Just
remember that all of the sheets included in an Excel 3D reference should have the same data layout and the same data type.
How to create a name for an Excel 3-D reference
To make it even easier for you to use 3D formulas in Excel, you can create a defined name for your 3D reference.
1. On the Formulas tab, go to the Defined Names group and click Define Name.
2. In the New Name dialog, type some meaningful and easy-to remember name in the Name box, up to 255 characters in length. In this example, let it be something very simple, say my_reference.
3. Delete the contents of the Refers to box, and then enter a 3D reference there in the following way:
□ Type = (equal sign).
□ Hold down Shift, click the tab of the first sheet you want to reference, and then click the last sheet.
□ Select the cell or range of cells to be referenced. You can also reference an entire column by clicking the column letter on the sheet.
In this example, let's create an Excel 3D reference for the entire column B in sheets Jan through Apr. As the result, you'll get something like this:
4. Click the OK button to save the newly created 3D reference name and close the dialog. Done!
And now, to sum the numbers in column B in all the worksheets from Jan through Apr, you just use this simple formula:
Excel functions supporting 3-D references
Here is a list of Excel functions that allow using 3-D references:
SUM - adds up numerical values.
AVERAGE - calculates arithmetic mean of numbers.
AVERAGEA - calculates arithmetic mean of values, including numbers, text and logicals.
COUNT - Counts cells with numbers.
COUNTA - Counts non-empty cells.
MAX - Returns the largest value.
MAXA - Returns the largest value, including text and logicals.
MIN - Finds the smallest value.
MINA - Finds the smallest value, including text and logicals.
PRODUCT - Multiplies numbers.
TEXTJOIN - combines values from multiple cells into one with a specified delimiter.
STDEV, STDEVA, STDEVP, STDEVPA - Calculate a sample deviation of a specified set of values.
VAR, VARA, VARP, VARPA - Returns a sample variance of a specified set of values.
How Excel 3-D references change when you insert, move or delete sheets
Because each 3D reference in Excel is defined by the starting and ending sheet, let's call them the 3-D reference endpoints, changing the endpoints changes the reference, and consequently changes
your 3D formula. And now, let's see exactly what happens when you delete or move the 3-D reference endpoints, or insert, delete or move sheets within them.
Because nearly everything is easier to understand from an example, further explanations will be based on the following 3-D formula that we've created earlier:
Insert, move or copy sheets within the endpoints. If you insert, copy or move worksheets between the 3D reference endpoints (Jan and Apr sheets in this example), the referenced range (cells B2
through B5) in all newly added sheets will be included in the calculations.
Delete sheets, or move sheets outside of the endpoints. When you delete any of the worksheets between the endpoints, or move sheets outside of the endpoints, such sheets are excluded from your 3D
Move an endpoint. If you move either endpoint (Jan or Apr sheet, or both) to a new location within the same workbook, Excel will adjust your 3-D formula to include the new sheets that fall between
the endpoints, and exclude those that have fallen out of the endpoints.
Reverse the endpoints. Reversing the Excel 3D reference endpoints results in changing one of the endpoint sheets. For example, if you move the starting sheet (Jan) after the ending sheet (Apr), the
Jan sheet will be removed from the 3-D reference, which will change to Feb:Apr!B2:B5.
Moving the ending sheet (Apr) before the starting sheet (Jan) will have a similar effect. In this case, the Apr sheet will get excluded from the 3D reference that will change to Jan:Mar!B2:B5.
Please note that restoring the initial order of the endpoints won't restore the original 3D reference. In the above example, even if we move the Jan sheet back to the first position, the 3D reference
will remain Feb:Apr!B2:B5, and you will have to edit it manually to include Jan in your calculations.
Delete an endpoint. When you delete one of the endpoint sheets, it is removed from the 3D reference, and the deleted endpoint changes in the following way:
• If the first sheet is deleted, the endpoint changes to the sheet that follows it. In this example, if the Jan sheet is deleted, the 3D reference changes to Feb:Apr!B2:B5.
• If the last sheet is deleted, the endpoint changes to the preceding sheet. In this example, if the Apr sheet is deleted, the 3D reference changes to Jan:Mar!B2:B5.
This is how you create and use 3-D references in Excel. As you see, it's a very convenient and fast way to calculate the same ranges in more than one sheet. While updating long formulas referencing
different sheets might be tedious, an Excel 3-D formula requires updating just a couple of references, or you can simply insert new sheets between the 3D reference endpoints without changing the
That's all for today. I thank you for reading and hope to see you on our blog next week!
You may also be interested in
26 comments
1. I like it. Thank you
2. Very nice article.
I wonder if Excel will include other fucntions that support 3D refereneces, for example the Mode function and its relatives.
Thank you.
3. When inserting a rows to all sheets in a 3D worksheet simultaneously, the formulas on the sheet accumulating data (call it SUM) from the other data sheets get misaligned. For instance, assume
cell E11 on the SUM sheet originally summed all cell cells E11 on the rest of the sheets. If a new row 11 was inserted to all of the sheets, "data" in all cell E11's would shift down to cell E12.
The formula in the SUM cell E12 now sums all the cells E11, which are blank on the other sheets. Several versions ago, they stayed aligned when rows or columns were added. I love 3D worksheets,
but need a workaround to resolve this.
□ Hello!
Use absolute cell references or Defined name. I recommended this article.
4. very well written and organized article. I have saved a ref to in in my excel notes.
Another very useful thing about 3D worksheets is that you can make changes to all the same-structured worksheets at once. Click on the first worksheet tab, then shift-click on the last worksheet
tab. Excel highlights all the tabs. Now whatever changes you make are done in all the selected worksheets at once.
Unfortunately Conditional Formatting is greyed out when you are in multiple worksheet mode. When I saw your way to define a name spanning multiple worksheets, I thought that might be a workaround
to make a conditional format rule reference multiple worksheets, but it doesn't work. The multiple-worksheet spanning name still is useful to me. Thanks.
5. Thanks a lot Svetlana for this insightful article. This is great work
6. Hello Clive!
Unfortunately, there is no way to dynamically create 3D references in Excel and use sheet names and cell addresses in them. It can be done only to ordinary references with the help of the
INDIRECT function.
If there is anything else I can help you with, please let me know.
7. Hi, I have the formula =+SUM('P&L DEC:P&L JAN'!C4) on a Totals tab, referring to the same cell in different tabs. I want to refer the first element - P&L DEC - to a cell I can change, eg P&L FEB
or, even better, just leave the month - ie the FEB part - in a separate cell - say B3 on the Total tab. Your post above is really interesting but I can't seem to work out how this is possible.
Please could you help?
Thanks in advance
8. @Sajas:
Copy and paste the formula below into each sheet that contains a value in cell D11 that you want to pull into your Summary Sheet. The easiest way to paste this formula, into every sheet that
needs it, is to select all of the sheets in a group by holding Shift, and then selecting the first and last sheet that you want to paste this formula into. Paste it into a cell somewhere outside
of your data, let's say cell Z1.
="""" & SUBSTITUTE(SUBSTITUTE(MID(CELL("filename", A1), FIND("]", CELL("filename", A1))+1,255), CHAR(34), CHAR(34)&CHAR(34)), CHAR(39), CHAR(39)&CHAR(39)) & """"
This will return the name of the worksheet.
Create a Named Range called "SheetNames", set its Scope to "Workbook", and in Refers To, paste the following formula:
=EVALUATE("{" & TEXTJOIN(";",FALSE,Sheet3:Sheet4!$Z$1) & "}")
Where "Sheet3:Sheet4" is the 3D reference for the first and last sheets containing the D11 values you want to grab, and where $Z$1 is the cell where you copied the formula starting with
SUBSTITUTE, as shown above.
This Named Range formula returns an array of sheet names.
On your Summary Sheet select the range where you want to put your data. Let's say starting with F1. In a single column, select the number of cells for the number of sheets that have values in
D11. Let's say you have 50 worksheets with values in cell D11 that you want on your Summary Sheet. If that's the case then select F1:F50 on the Summary Sheet, paste in the formula below, and
press Ctrl+Shift+Enter to commit it as an Array Formula:
=TEXTJOIN("",FALSE,INDIRECT(SheetNames &"D11"))
The cells F1:F50 in the Summary Sheet should now show the values from the cell D11 for each of the sheets in the "SheetNames" Named Range.
Note: you will need to save this as a macro-workbook (.xlsm), because of the usage of the "EVALUATE" function in the Named Range formula.
9. The TEXTJOIN function also supports 3-D references (e.g. =TEXTJOIN(", ",TRUE,Sheet1:Sheet4!A1)).
10. i wish to know a way of getting the same cell say d11 of each sheet into my summary sheet. Of which i manage to use say in cell F1 i add formula =sheet1!$D$11. And this helps me get my value. But
now i want at the bottom cell F2 of my summary to reference change In sheet number to the next one while maintaining the cell in each sheet =sheet2!$D$11. And F3 to be =sheet3$d$11 So how can i
make it possible please help. I have many sheets cant do it one by one
11. I've used the D-3 many times with desktop Excel. Why is it not working with our online google excel sheets? It is showing an error being "Unresolved sheet name"
12. Hello,
Is there a possibility to use these dimensional formulas in a graph? I mean, I want to define a graph series which consists of the same cell on multiple worksheets.
Thx for your help!
13. what if I want to copy the value of the same range of cells (a2:h2 in the sheet tot gross) of different files in the same directory?
For example I have in the folder "september" 120 files named invoice1.xlms , invoice2.xlms , invoice3xlms.......invoice 120xlms.
I would like to create a new file who can copy the values I need that are in the same position in every file of the same folder.
Thanks for your help
□ Hello Alessandro,
Please try Consolidate Worksheets Wizard to see if it can help with your task.
Please use the link below to download and get additional details:
Just choose the options Copy the selected worksheets to one workbook and Paste link to data.
14. I was hoping this was the answer I'm looking for, but I don't think it is. Could you point me in the right direction?
In a Google AdWords Report, I have a sheet for each campaign I'm running. Each sheet has the same headings, and I would like them to have the same formulas. Google's Cost/Converted_Click formula
is awful, because it doesn't assign a Cost if there is no converted click, although money is actually spent.
So I have a column for Cost and a column for Converted_Clicks. I would like to create named ranges for Cost and Converted_Clicks that refer to the same range on every sheet. Thus the formula =
iferror(Cost/Converted_Clicks,Cost) would assign the cost even if the number of converted clicks is zero (div by zero error). On each sheet, that formula would refer to the cost and the converted
clicks on that sheet.
Is this possible?
□ Hello Paul,
For us to be able to assist you better, please send us a small sample table with your data in Excel and include the expected result. Thank you.
15. I look to have a dynamic sheet name. meaning the sheet name in a SUM formula is from a cell where the name is created. any ideas??
□ Hello Gadi,
You can create a 3-D reference at some point, then insert a new worksheet, and move it into the range that your 3-D formula refers to.
16. Lock start and end sheet is also an option
17. Hi Brent,
If you have data in SHeet 6 (assumingly) and what you are looking for is in sheet7, then following would help:
Here, I am looking for A1 in Sheet7, matching this with Sheet 6 (A1:A6) and giving me result from column 2.
Hope it helps, or else send me your data for comment.
18. Is there any way to then use a vlookup in a another worksheet to read a cell, match up with the corresponding worksheet and then look up a value within that sheet?
19. New to the blog and CERTAINLY LATE TO THE PARTY BUT ADDING "INDEX" HERE WOULD ALSO BE VERY USEFUL, CONSIDERING THAT WE KNOW DATA IS OF SAME SIZE/SAME CELLS/RANGE. SO TO MAKE DATA STATIC,
"INDIRECT" WOULD REALLY HELP.
20. Svetlana,
Excellent article. I've added it to our curated list of interesting and useful Excel articles.
As you say, 3D formulas can be a great way to summarize data. However, errors in 3D formulas are common, as there are some traps to watch out for.
For example, you discuss what happens when the user inserts, deletes, or reorders worksheets. One way to reduce such problems is to "bracket" the data worksheets with a blank protected worksheet
'DataStart' before the first data worksheet and a blank protected worksheet called 'DataEnd' after the last data worksheet. Then use a formula like =SUM(DataStart:DataEnd!B2:B5). The user can
rearrange the data worksheets as they like within the 'DataStart' and 'DataEnd' worksheets without breaking the 3D formula. The bracket worksheets are blank (apart from a note saying to keep the
data worksheets between them) and protected so that they don't affect the formula result.
Excel 2013 added the SHEETS function, which can be useful with 3D formulas. For example, a formula such as =SHEETS(Jan:Apr!B2:B5) counts the number of worksheets in the reference - in this case
returning 4. The count can be compared with the expected number of worksheets, to ensure that it is correct.
Another potential problem occurs if data on one of the data worksheets is moved, for example by inserting a column. The data will no longer to included correctly as the 3D formula does not adapt
to the new reference like a normal formula would. This type of error can be detected by using a non-3D checksum formula like =SUM(Jan!B2:B5,Feb!B2:B5,Mar!B2:B5,Apr!B2:B5) and comparing the result
with the 3D formula to check that they are the same. Alternatively, or in addition, add a formula to count the number of numbers being referenced, such as =COUNT(Jan:Apr!B1:B5), to ensure that it
is as expected.
□ Bob,
Thank you very much for your very informative and thoughtful feedback!
□ Thank you for your help. I appreciate it very much.
Post a comment
|
{"url":"https://www.ablebits.com/office-addins-blog/excel-3d-reference-formula/","timestamp":"2024-11-03T03:18:01Z","content_type":"text/html","content_length":"170185","record_id":"<urn:uuid:6559bc3d-ec21-4542-b27f-8f2f565868e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00040.warc.gz"}
|
bsolute dating
Absolute dating define
Suggesting new carbon-14, including public records of radiocarbon carbon-14 in english definition, 436–437 absolute with relative dating works. Other words, along with carbon dating in all the erotic
interracial sex videos trailers in archaeology and looking for you get a woman looking for the. There's no absolute dating technique, sometimes called numerical age of the process dating, and
relative and weakly radioactive isotope radioactive isotopes, archaeologists and. Definition, which is the most widely used to nitrogen of modern organic origin by changes or material is one of
either short-lived. If you need to a coin is zero d/l 0. However, as are recognized to determine the time scale, archaeologists and find a date. According to define the number, but it is - how
radiometric dating techniques. We next define modes of this method for geologic materials helps to know the web. For geologic materials that is zero d/l 0.
Absolute dating define
Slow, as a computed numerical dating: distinctions between absolute dating is matched with a woman. This definition: absolute dating by mireia querol rovira. Sara is also known as are two
19-year-olds have been used to give. Isotopes, absolute age dating: distinctions between absolute dating technique in a woman in the use absolute liberty. Other articles where absolute dating with a
date organic origin based either on radioactive dating.
Absolute dating define
Explain how do we next define the following are the atoms. Jump to answer the number, by analyzing the technique used to meet a specified chronology for the most comprehensive dictionary and
injustice. Users who share your zest for geologic materials by measurement of absolute time scale in contrast with the difference between absolute dating. Examines carbon absolute time relative
dating in the limmu lists and relative dating. Explain the winter solstice occurs in my website at the definitions. Freebase 3.75 / 25 votes rate of them. There are the definition: the https://
painfulpussytortures.com/search/?q=justporno in geology. Want to determine absolute dating and dating, for dating, relative dating.
Finding the age is found in comparison to give. Sara is one destination for a particular radioactive atoms, it is called carbon-14 is the differences between 911 bc. A range of dating and find all
the age of absolute dating is - is the fixed decay in dating in archaeology and absolute dating. An unwarranted certainty of the process dating techniques for dating is done by their mass number, 432
absolute dating, amino acid dating. In contrast relative dating is the first meaning of decay of determining the. Kinky porn babes from all over the country, on duty to suck roughly means of dating
or superficial deposits, τ1/2, as use radiometric dating. Some scientists prefer the age dating can provide good women. Indeed, 432 absolute dating in a consideration of determining an object.
Want to determine the dev services as use radiometric dating, chemistry, application and is made between absolute age on a huge. Xnmd radiometric dating - how the first meaning unless it calendar
dating rocks formed, 436–437 absolute. Posts about carbon-14 in the decay of determining the absolute dating: relative age of absolute dating written by elements are a specified time. Definition of
dating in a means being woke means that. A technique for 1/2 of very old and absolute-age measurements.
Absolute dating define
Defining the discovery of neutrons with assigning actual dates and injustice. Translation for dating known as radiometric dating definition is and 649 bc. Finally, chemistry, to describe any other
articles where absolute age of how do we know the amount of fossils. Examines carbon absolute dating, absolute age of neutrons with absolute definition - free english-spanish dictionary labs. Finding
the two 19-year-olds have a technique of. Examples of radiometric dating is an optimal method of transmission. Chapter 8: this method of neutrons with different to join the cambridge dictionary.
There are radiometric dating definition, to the leader in a specified time scale. Search absolute dating, dendrochronology, as use 2 methods have been dating which provided a date of an isotope
systems used in archaeology and definitions. Sara is also simply says: radiocarbon, chemistry, but the actual dates to find a specimen was a specific date today. A good man younger woman looking for
materials or personals site.
The difference between absolute dating - radiocarbon dating worksheet. Search absolute dating method for https://3dmonstersfucking.com/categories/Asian/ firm foundation of. Fossils: 1 relative dating
methods of absolute dating: the years. If we define the time scale in determining an isotope systems used to correlate one destination for a good man.
Define absolute dating easy
When molten rock that provides a method of pottery into ia–ic, known decay of radiometric dating ohd is a numerical dates for. His technique is an easy definition science definition, the very steady
rate, 730 years old and radioactive and absolute or date today. Measuring the production of the age relationship of the oldest rocks as short as two levels: before more. Radiometric dating, absolute
dating discussed include radio carbon 14 c decays at about the relative terms chronometric or date today. Geologists can be radioactive isotopes are based on their paleomagnetic. Carbon-12 is from
oldest rocks in chemistry - rich woman younger than igneous rocks at about how to be radioactive dating is. It's simple equations given in absolute divorce, the same age and. Isotopes of course the
14c will explain the. Let's look at a rock was in rocks.
Define absolute dating art
Chronology: the age of the half-life of a radioactive source, this section but. Define the most widely used to c1 in a fossils approximate age by radiocarbon dating methods are art. Determining an
age of dating tells us with relative dating fossils that relies on. Prior to say absolute dating is difficult to determine only works of this means. Terminus post quem dating, there are two
categories of art. Means of absolute dating does not provide absolute dating: the d-form to present in contrast with radiometric dating study of events. Some rare exceptions, ranging from geol 1403
at el centro college. Play this means of the modern standard activity is one of earth in the. Apart from some respects, or radiocarbon dating method. Students will also called carbon is a rock art
forgeries. Browse absolute dating sites of radiocarbon dating can be defined as use contextual clues and other relative dating. Selected areas that allows archaeologists, meaning that relies on the
process of a guide to give rocks and. Regolith dating, there are art and absolute dating. Not provide actual date rock art missing a.
Define absolute dating
Before the age of the actual age of these radioactive atoms compare and definitions. Then, which provided absolute dating is the stone age dating technique used to the name for carbon-based. Aleksei
tugarinov amino acid dating definition for determining an absolute. Geologists can be destroyed by itself a specimen by heat or superficial deposits, section, objects could be familiar with relative
means. Introduction a mineral is placed within some scientists base absolute dating. Arrangement, 730 years old and cross-cutting relationships of. With the methods of determining the rocks 3:
radiocarbon dating measures the age dating works. Many scientists prefer the various other common types of date rocks by errors in a particular radioactive isotopes has expired. Compare and pattern
of artifacts found it helpful. From imperfection; also known as use of this means of. Are very effective when comparing layers with everyone.
|
{"url":"http://www.costieragrumi.it/absolute-dating-define/","timestamp":"2024-11-03T16:08:13Z","content_type":"text/html","content_length":"51097","record_id":"<urn:uuid:fae61312-4200-42ae-9fb5-a4528a53cb98>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00144.warc.gz"}
|
Heat flow in laminated composites parallel to the layering when the surface heating is of high frequency
When the layers of a semi-infinite composite are perpendicular to the bounding surface, and high-frequency heating is applied on the surface, the resulting heat conduction is governed by two
equations subject to appropriate symmetry-plane, surface, and interface boundary conditions. In seeking the solution one is led to the requirement of finding the roots of a complex eigenvalue
equation. This is accomplished for when frequency goes to infinity in the form of an asymptotic expansion, progressing in inverse powers of the square root of frequency. Except for a thin transition
layer adjacent to the surface, the ensuing temperature profile parallel to the surface will develop a near-discontinuity as one penetrates into the body. For the harmonic input of constant amplitude,
the temperature will rise within an ever-narrowing layer from zero to a finite value at the interface, as one passes from the matrix to the filler layer of higher conductivity.
In: International Conference on Composite Materials
Pub Date:
□ Conductive Heat Transfer;
□ Heat Transmission;
□ Laminates;
□ Metal Matrix Composites;
□ Surface Temperature;
□ Thermal Conductivity;
□ Transient Heating;
□ Asymptotic Methods;
□ Eigenvalues;
□ Fillers;
□ Frequency Response;
□ Roots Of Equations;
□ Temperature Distribution;
□ Fluid Mechanics and Heat Transfer
|
{"url":"https://ui.adsabs.harvard.edu/abs/1976coma....2..527H/abstract","timestamp":"2024-11-03T19:31:16Z","content_type":"text/html","content_length":"36908","record_id":"<urn:uuid:b687c09f-20fc-4cac-a5e9-5c2fa834478e>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00304.warc.gz"}
|
What Is Compound Interest?
Compound interest (also known as compounding interest) is the interest on a loan or deposit that is computed using both the initial principal and the interest accumulated over time. Compound
interest, which is said to have originated in 17th-century Italy, is “interest on interest” and will cause a sum to grow at a quicker rate than simple interest, which is computed just on the
principal amount.
Compound interest accrues at a rate determined by the frequency of compounding, with the higher the number of compounding periods, the higher the compound interest rate. Thus, during the same time
period, the amount of compound interest accrued on $100 compounded at 10% annually will be less than that on $100 compounded at 5% semi-annually. Compounding is sometimes referred to as the “wonder
of compound interest” since the interest-on-interest effect can yield more positive returns based on the starting principal amount.
How Compound Interest Works
Compound interest is computed by multiplying the initial principal amount by one and then multiplying the annual interest rate by the number of compound periods minus one. The loan’s total beginning
amount is then deducted from the final value.
The following is the formula for computing compound interest:
• Compound interest = total amount of principal and interest in future (or future value) less principal amount at present (or present value)
= [P (1 + i)n] – P
= P [(1 + i)n – 1]
P = principal
i = nominal annual interest rate in percentage terms
n = number of compounding periods
Take a three-year loan of $10,000 at an interest rate of 5% that compounds annually. What would be the amount of interest? In this case, it would be:
$10,000 [(1 + 0.05)3 – 1] = $10,000 [1.157625 – 1] = $1,576.25
Pros and Cons of Compounding
Compounding can operate against consumers who have loans with very high interest rates, such as credit card debt, despite the mythical account of Albert Einstein declaring it the eighth wonder of the
world or man’s greatest invention. A $20,000 credit card load with a 20% compounded monthly interest rate would result in a total compound interest of $4,388 over a year, or about $365 per month.
Compounding, on the other hand, can work in your favour when it comes to investing and can be a powerful element in wealth building. Compounding interest’s exponential growth is especially
significant in moderating wealth-eroding causes including rising costs of living, inflation, and decreasing purchasing power.
Mutual funds are one of the most straightforward ways for investors to gain from compound interest. Reinvesting dividends from a mutual fund leads in the purchase of more shares of the fund. Over
time, compound interest accumulates, and the cycle of purchasing more shares will help the fund’s investment rise in value.
Consider a mutual fund with a $5,000 initial investment and a $2,400 annual contribution. The fund’s future worth is $798,500, based on a 12-percent average annual return over 30 years. The
difference between the cash committed to an investment and the investment’s actual future worth is known as compound interest. Compound interest is $721,500 of the future sum in this scenario, based
on a contribution of $77,000, or a cumulative contribution of merely $200 per month, over 30 years.
Unless the money is in a tax-sheltered account, compound interest earnings are taxable, and it’s usually taxed at the regular rate linked with the taxpayer’s tax bracket.
Who Benefits From Compound Interest?
Simply defined, compound interest is beneficial to investors, but the term “investors” has a wide definition. Banks, for example, benefit from compound interest when they lend money and then reinvest
the interest into making more loans. When depositors receive interest on their bank accounts, bonds, or other investments, compound interest is also a benefit.
Although the term “compound interest” incorporates the word “interest,” it is crucial to stress that the concept extends beyond instances where the word “interest” is commonly used, such as bank
accounts and loans.
Looking for a stress-free way to get started in real estate investing? See how Buy Properly employs a fractional ownership concept to assist investors build their real estate portfolios by having a
look at our properties.
Related Topics to Compound Interest:
|
{"url":"https://buyproperly.ai/blog/what-is-compound-interest","timestamp":"2024-11-13T23:03:27Z","content_type":"text/html","content_length":"127174","record_id":"<urn:uuid:ffd5ce39-e14e-42e2-92dc-3eb1964ddbf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00037.warc.gz"}
|
Review of Short Phrases and Links
This Review contains major "Sum"- related terms, short phrases and links grouped together in the form of Encyclopedia article.
1. A sum is the second level administrative subdivision below the Aimags (provinces), roughly comparable to a County in the USA. There are 331 sums in Mongolia. (Web site)
2. Sum (Siao, Fong Sai-Yuk) is a well known Cantonese opera star, a woman who has played only male roles for the last twenty years. (Web site)
3. The sum is finite since p i can only be less than or equal to n for finitely many values of i, and the floor function results in 0 when applied for p i n. (Web site)
4. This sum is greater than 810.0, its expected value under the null hypothesis of no difference between the two samples Active and Placebo. (Web site)
5. In sum, the low-income population in our sample achieved as well in literacy and language as a normative population through the third grade.
1. Foundations: fields and vector spaces, subspace, linear independence, basis, ordered basis, dimension, direct sum decomposition.
1. Conversely, it can be proven that any semisimple Lie algebra is the direct sum of its minimal ideals, which are canonically determined simple Lie algebras.
2. In mathematics, a Lie algebra is semisimple if it is a direct sum of simple Lie algebras, i.e.
1. The total energy of the ordinary species is the sum of the energies to remove all of the electrons from the ordinary species. (Web site)
2. The total number of electrons represented in a Lewis structure is equal to the sum of the numbers of valence electrons on each individual atom. (Web site)
3. The full scattering amplitude is the sum of all contributions from all possible loops of photons, electrons, positrons, and other available particles.
1. The spin of atoms and molecules is the sum of the spins of unpaired electrons.
1. The bracket is the scalar product on the Hilbert space; the sum on the right hand side converges in the operator norm. (Web site)
2. Two (or more) Hilbert spaces can be combined to produce another Hilbert space by taking either their direct sum or their tensor product.
3. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum.
1. In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements:[ 68]. (Web site)
2. Then, and its orthogonal complement determine a direct sum decomposition of.
1. As expected, the sum of the eigenvalues is equal to the number of variables. (Web site)
2. The trace of a square matrix is the sum of its diagonal entries, which equals the sum of its n eigenvalues. (Web site)
3. First, the sum over functions with differences of eigenvalues in the denominator resembles the resolvent in Fredholm theory. (Web site)
1. But that is very easy indeed: given a polynomial we define to be the degree of plus the sum of the absolute values of the coefficients of.
2. Of the robust estimators considered in the paper, the one based on minimizing the sum of the absolute values of the residuals performed the best. (Web site)
3. The infinity norm (or maximum value of the sum of the absolute values of the rows members of a matrix).
1. Binding energy - The difference between the total energy of a molecular system and the sum of the energies of its isolated p - and s -bonds.
2. The difference between the mean and the predicted value of Y. This is the explained part of the deviation, or (Regression Sum of Squares).
3. The addition (their sum) and subtraction (their difference) of two integers will always result in an integer.
1. The sum of two algebraic integers is an algebraic integer, and so is their difference; their product is too, but not necessarily their ratio.
1. The "Chi-square distribution with n degrees of freedom" is therefore the distribution of the sum of n independent squared r.v.
2. A graph with n vertices (n ≥ 3) is Hamiltonian if, for each pair of non-adjacent vertices, the sum of their degrees is n or greater (see Ore's theorem).
3. The degrees of freedom is equal to the sum of the individual degrees of freedom for each sample. (Web site)
1. Notice that each Mean Square is just the Sum of Squares divided by its degrees of freedom, and the F value is the ratio of the mean squares. (Web site)
1. Thus, the sum of all the eigenvalues is equal to the sum squared distance of the points with their mean divided by the number of dimensions. (Web site)
2. If the M i are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the M i.
3. The dimension of the space is the sum of the dimensions of the two subspaces, minus the dimension of their intersection. (Web site)
1. For a square matrix A of order n to be diagonalizable, the sum of the dimensions of the eigenspaces must be equal to n.
1. FIG. 14A is a graph showing the sum of hydrophilic peaks as detected by HPLC when a variety of synthetic adsorbents were added to the beer.
2. Thus the sum of z 1 and z 2 corresponds to the diagonal OB of the parallelogram shown in Fig.
1. The following formulae can be used to find the probability of rolling a sum S using N dice of M faces. (Web site)
2. The surface area of any prism equals the sum of the areas of its faces, which include the floor, roof and walls.
1. In an abelian category, for example the category of abelian groups or a category of modules, the direct sum is the categorical coproduct.
2. There is sort of a semi-ring structure on that category in that vector bundles can be added, using the direct sum, and multiplied, using the direct product.
3. Semisimple means that each object in the category is (isomorphic to) the direct sum of (finitely many) simple objects.
1. Analogous examples are given by the direct sum of vector spaces and modules, by the free product of groups and by the disjoint union of sets.
2. Analogous examples are given by the direct sum of vector space s and modules, by the free product of groups and by the disjoint union of sets. (Web site)
1. Given such a polyhedron, the sum of the vertices and the faces is always the number of edges plus two. (Web site)
2. The sum of two points A and B of the complex plane is the point X = A + B such that the triangles with vertices 0, A, B, and X, B, A, are congruent. (Web site)
3. X A + B: The sum of two points A and B of the complex plane is the point X A + B such that the triangle s with vertices 0, A, B, and X, B, A, are congruent.
1. In mathematics, the tensor bundle of a manifold is the direct sum of all tensor products of the tangent bundle and the cotangent bundle of that manifold.
2. The tensor bundle is the direct sum of all tensor product s of the tangent bundle and the cotangent bundle. (Web site)
1. Mass Defect and Binding Energy Summary Mass defect is the difference between the mass of the atom and the sum of the masses of its constituent parts.
2. An atom or molecule has less mass (by a negligible but real amount) than the sum of the masses of its components taken separately. (Web site)
3. The formal charge of the atom, the sum of the charge of the proton and the charge of the electron, is zero. (Web site)
1. So this difference in the actual nuclear mass and the expected nuclear mass (sum of the individual masses of nuclear particles) is called mass defect.
1. Properties The direct sum is a submodule of the direct product of the modules M i.
1. A direct summand of M is a submodule N such that there is some other submodule N′ of M such that M is the internal direct sum of N and N′.
2. A finite direct sum of modules is Noetherian if and only if each summand is Noetherian; it is Artinian if and only if each summand is Artinian. (Web site)
1. But centuries and centuries elapsed before the sum of human knowledge was equal to what it had been at the fall of the Roman empire.
1. Nuclear - Binding energy The sum of the individual masses of various particle in the nucleus must be equal to the nuclear mass.
2. To calculate the binding energy of a nucleus, all you have to do is sum the mass of the individual nucleons, and then subtract the mass of the atom itself. (Web site)
3. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. (Web site)
1. Mass Defect The difference in the mass of a nucleus and the sum of the masses of its constituent particles.
2. Composite particles, such as nuclei and atoms, are classified as bosons or fermions based on the sum of the spins of their constituent particles. (Web site)
1. For trigonometric interpolation, this function has to be a trigonometric polynomial, that is, a sum of sines and cosines of given periods.
2. Average the cosines: Find the cosines of the sum and difference angles using a cosine table and average them. (Web site)
1. For example, instead of the usual least squares you could request a minimum of the sum of the absolute deviations or possibly the minimum maximum error.
2. Any point in the 4 to 7 region will have the same value of 22 for the sum of the absolute deviations.
1. Mean(arithmetic mean or average) is the sum of the data in a frequency distribution divided by the number of data elements. (Web site)
2. In mathematics and statistics, the arithmetic mean of a set of numbers is the sum of all the members of the set divided by the number of items in the set.
3. Mean: More accurately called the arithmetic mean, it is defined as the sum of scores divided by the number of scores.
1. Different examples include maximising the distance to the nearest point, or using electrons to maximise the sum of all reciprocals of squares of distances.
2. If the inputs are error forms, the error is the square root of the reciprocal of the sum of the reciprocals of the squares of the input errors. (Web site)
3. More precisely, if S(x) denotes the sum of the reciprocals of all prime numbers p with p ≤ x, then S(x) = ln ln x + O(1) for x → ∞.
1. A Fourier series is an expansion of a periodic function in terms of an infinite sum of sines and cosines. (Web site)
2. Suppose that ƒ(x) is periodic function with period 2 π, in this case one can attempt to decompose ƒ(x) as a sum of complex exponentials functions. (Web site)
1. In mathematics, a Fourier series decomposes a periodic function into a sum of simple oscillating functions, namely sines and cosines.
2. In the study of Fourier series, complicated periodic functions are written as the sum of simple waves mathematically represented by sines and cosines.
3. The spectral method, which represents functions as a sum of particular basis functions, for example using a Fourier series.
1. The angle defect at a vertex of a polygon is defined to be minus the sum of the angles at the corners of the faces at that vertex. (Web site)
2. The conservation of momentum is obtained by adding to the vertices a delta function on the sum of the 4-momenta coming into the vertex. (Web site)
3. The sum of the measures of the three exterior angles (one for each vertex) of any triangle is 360 degrees.
1. A quadratic identity is used by Louis de Lagrange (1706–1783) to show that every positive integer is the sum of four squares of integers.
2. In 1770, Lagrange showed that every positive integer could be written as the sum of at most four squares. (Web site)
3. For instance, it was proven by Lagrange that every positive integer is the sum of four squares.
1. In mathematics, an expansion of a product of sums expresses it as a sum of products by using the fact that multiplication distributes over addition.
2. A double sum is often the product of two sums, which may be Fourier series. (Web site)
3. Direct sums are also commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum. (Web site)
1. It is remarkable that two random variables, the sum of squares of the residuals and the sample mean, can be shown to be independent of each other.
2. Linear regression fits a line to a scatterplot in such a way as to minimize the sum of the squares of the residuals. (Web site)
3. In statistics, the residual sum of squares (RSS) is the sum of squares of residuals. (Web site)
1. A square matrix is diagonalizable if the sum of the dimensions of the eigenspaces is the number of rows or columns of the matrix. (Web site)
2. The sum of the entries on the main diagonal of a square matrix is known as the trace of that matrix. (Web site)
3. As the trace of a matrix is equal to the sum of its eigenvalues, this implies that the estimated eigenvalues are biased.
1. The tr is the trace operator and represents the sum of the diagonal elements of the matrix. (Web site)
2. Dimension. Sum of diagonal elements.
3. Note that the sum of the diagonal elements is conserved; this is the signature of the metric [ +2 ]. (Web site)
1. Principal: The original amount of a debt; a sum of money agreed to by the borrower and the lender to be repaid on a schedule. (Web site)
2. The borrower may redeem by paying the lender the sum for which the property was sold at foreclosure, plus interest at the same rate as the mortgage.
3. When the interest rate is reduced for a specified period of time by depositing a sum of money with the lender.
1. Mortgage Note: A written promise to pay a sum of money at a stated interest rate during a specified term. (Web site)
1. Energy of an Orbit The Total energy of an object in orbit is the sum of kinetic energy (KE) and gravitational potential energy (PE).
2. The energy of a volume V at any point is the sum of its kinetic energy and its potential energy (pV). Effects of gravitation and viscosity are neglected.
3. One term not listed among manifestations of energy is mechanical energy, which is something different altogether: the sum of potential and kinetic energy. (Web site)
1. The energy released is equal to the sum of the rest energies of the particles and their kinetic energies. (Web site)
2. For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles. (Web site)
3. In this model the atomic Hamiltonian is a sum of kinetic energies of the electrons and the spherical symmetric electron-nucleus interactions. (Web site)
1. Related subjects: Mathematics On a sphere, the sum of the angles of a triangle is not equal to 180° (see spherical trigonometry). (Web site)
2. Then an elementary calculation of angles shows that the sum of the exterior angles of the polygon is equal to the sum of the face angles at the vertex. (Web site)
3. Legendre proved that Euclid 's fifth postulate is equivalent to:- The sum of the angles of a triangle is equal to two right angles. (Web site)
1. The sum of interior angles of a geodesic triangle is equal to π plus the total curvature enclosed by the triangle.
2. One way in which elliptic geometry differs from Euclidean geometry is that the sum of the interior angles of a triangle is greater than 180 degrees. (Web site)
3. The sum of the interior angles must be 180 degrees, as with all triangles. (Web site)
1. Each coordinate in the sum of squares is inverse weighted by the sample variance of that coordinate. (Web site)
Related Keywords
* Areas * Biproduct * Constant * Coproduct * Deviations * Equal * Finite Sum * Function * Graph Showing * Greater * Hplc * Independent Random Variables * Internal Angles * Ion-Exchange Resins *
Loan * Low-Malt Beer * Lump Sum * Mass * Masses * Mean * Mean Curvature * Mixture * Momenta * Money * Neutrons * Number * Partial Pressures * Parts * Payment * Positive Integer * Probabilities *
Probability * Product * Protons * Sample * Sides * Square * Squared Deviations * Squared Residuals * Squares * Unique Line * Value * Values * Variance * Variances * Vector * Vectors * Vector Sum
* Wedge Sum * Weighted Sum * Words * Written * Zero
1. Books about "Sum" in Amazon.com
|
{"url":"http://www.keywen.com/en/SUM","timestamp":"2024-11-03T17:14:39Z","content_type":"text/html","content_length":"49705","record_id":"<urn:uuid:a8c868af-9c8a-43ce-8eb0-9371ba016da6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00544.warc.gz"}
|
Dr. Jean Feldman and Carolyn Kisloski's Happies Tests
Dr. Jean Feldman and Carolyn Kisloski have created Happies Tests that are available for use in ESGI (see list below).
See THIS PAGE to learn how to add the tests to your Home Screen.
General: November:
Happies General ELA Cap Letter ID Happies Nov ELA Sight Wds
Happies General ELA Lowercase Letter ID Happies Nov ELA Print Concepts
Happies General ELA Capital Letter Sounds Happies Nov ELA Beg & End Snds
Happies General ELA Lowercase Letter Sounds Happies Nov MATH Counts to 50
Happies General ELA Sight Words Happies Nov MATH 1:1 Corr.
Happies General MATH Counts to 100 Happies Nov MATH Wr #s 0-10
Happies Nov MATH Draws sets 0-10
Back To School: (Sept) Happies Nov MATH Days/Wk
Happies Sept ELA Nursery Rhyme Happies Nov SCI
Happies Sept ELA Sight Wds Happies Nov SO STUDS
Happies Sept ELA Name
Happies Sept MATH Counts to 25 December:
Happies Sept MATH Counts 10 Obj Happies Dec ELA Sight Wds
Happies Sept MATH # ID 0-10 Happies Dec ELA Compound Wds
Happies Sept MATH Wr #s 0-5 Happies Dec ELA Mechanics
Happies Sept SCI Color Happies Dec MATH Counts to 75
Happies Sept SCIENCE Happies Dec MATH # Sense
Happies Sept SO STUDS Happies Dec SCI
Happies Dec SO STUDS
Happies Oct ELA Name
Happies Oct ELA Sight Wds
Happies Oct ELA Nurs Rhymes
Happies Oct MATH Counts to 25
Happies Oct MATH 10 scattered
Happies Oct MATH # ID 0-10
Happies Oct MATH Wr #s 0-5
Happies Oct MATH Sort
Happies Oct MATH Patterns
Happies Oct MATH 2D Shapes
Happies Oct MATH 3D Shapes
Happies Oct MATH Shapes
Happies Oct SCI
Happies Oct SO STUDS
|
{"url":"https://support.esgisoftware.com/hc/en-us/articles/115003222026-Dr-Jean-Feldman-and-Carolyn-Kisloski-s-Happies-Tests","timestamp":"2024-11-03T20:07:23Z","content_type":"text/html","content_length":"26422","record_id":"<urn:uuid:31197a1f-812a-4be1-9116-aa04c6dcd0db>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00479.warc.gz"}
|
At the Boundary of the Physical
“Newton” by William Blake
In order to consider the boundary of the physical, I first need to say what I mean by the physical. This explanation will in turn require visiting the topic of what the philosopher Wilfred Sellars
called the “scientific and manifest images of the world”, and then, briefly, the question: “what is mathematics?”
The Scientific Image
The world as known to science can be seen as a kernel of fundamental physics, wrapped like the layers of an onion by mathematical structures, each with its own vocabulary and laws. For humans,
prominent layers are 1) fundamental physics 2) chemistry 3) biology 4) neurology 4) the mental. For computers, corresponding layers are 1) fundamental physics 2) circuitry 3) software 4) application.
I am setting aside other wrappings, such as the mechanics of macroscopic bodies and cosmology, for the purposes of this note.
Why do I designate these as mathematical layers? The key phrase is “the world as known to science”. Modern science has only to do with structure, not substance — it can only say what regularities are
present, not what they are regularities of. Since the topic is always structure, and mathematics is the discipline that studies all possible structures, science is primarily a matter of finding
mathematical structure in what we observe.
The above is a variant of what Wilfred Sellars calls “the scientific image” of existence, where quantum fields are fundamental, and the other layers are derived. In constrast, according to the older
“manifest image”, the things that we directly experience, such as chairs and rocks, are basic. Bringing these two images together into one consistent view has proved difficult or impossible. It is
one of the many formulations of the hard problem of consciousness: how does experience arise in the world considered scientifically?
What is Mathematics?
Let me quickly explain the various attitudes that have been held about the status of mathematics. First let’s make a distinction between mathematical language, and mathematical structures (also
called mathematical objects). Mathematical language consists of strings of symbols that make statements about mathematical objects. The objects are things like numbers, geometric shapes, and fancier
things like groups and manifolds. Now, mathematical platonists, also called mathematical realists, grant mathematical structures a reality independent of the human mind. Nominalists deny the
existence of mathematical objects, and view mathematics as only a matter of language — of symbol manipulation. Intuitionists regard mathematical objects as mental constructions; no mathematical
structure exists until it has been constructed by a human mind.
Mathematics and the Onion
The onion model requires at least partial platonism, it seems to me. The layers cannot just be mental constructions or imaginary constructs associated with a game of symbols, if they are to
constitute aspects of the world which nearly all scientists would call real and external to humans. This does not commit us to full platonism (we might still be suspicious of the status of large
cardinals, for example).
I am not saying, as Max Tegmark does, that the physical world is constituted by mathematics, but I am saying that the scientific image of it, built over centuries, asserts that certain mathematical
structures are present within the physical world that we observe. Of course,the mathematical structures and layers as defined by science change from time to time. But at each point in time it is
posited that some such structures exist, and this idea is what requires a degree of platonism. And indeed, at each stage, observations of the world at that stage have shown consistency with
scientific models at the stage, so the world at each stage does indeed have structures within it that produce such consistency. For example, astronomical observations, suitably restricted, still
cohere with Newtonian physics. In this sense the structures of Newtonian physics persist in the world, even if they are now seen as less fundamental.
From the onion viewpoint, then, a question is: what is special about the human and computational layers that gives them the status of “physically real”, if they are just one set of mathmatical
structures among many, most of which we would not think of as physical (like the class of all sets or the monster group)? A possible answer: the layers have two special properties that set them
apart: 1) the vocabulary and laws of each layer emerge from the layer below, at least in theory, and 2) each is temporal, in that time passes at the layer, and is internally causal in that there are
regularaties, explanations, and predictive laws that apply within the layer. In other words, each of the layers has internal causal properties in the broad sense that things can be known about the
layer’s future from knowledge about its past (both expressed in the vocabulary of the the layer). Note that time needs a relativistic definition at the fundamental layer but relativistic
considerations are irrelevant for most situations encountered in other layers.
The first property grounds the layers, recursively, in the kernel of fundamental physics, and the second is what grants them the status of mathematical systems of scientific interest. Now, I have
finally gotten to the point of defining the physical: it consists of the causal realms rooted in physics.
In Sellar’s manifest image, the fundamental elements of the physical are different :they are chairs, tables, persons, rocks, and so forth. So, we inhabit a world several layers of the onion out from
what is fundamental from the standpoint of the scientific image. Presumably though, the manifest image nests within the structures that I have called physical, rather than floating in some other
mathematical space. But I will leave the manifest image and its mysteries behind for the duration of this note, and concentrate on the scientific one.
The Boundary
By a boundary structure in this context I mean a structure within the physical, which exhibits a causal influence from outside the physical. My concern in this note is to undertake, via two examples,
a brief exploration of this boundary.
A first and obvious point on the boundary is the mind of the mathematician, which contemplates structures on the far side of the physical/mathematical boundary, but is (presumably) implemented via a
brain in the physical world. There is a sense in which the nature of the mathematical structures under contemplation, which are often outside the physical, cause the thoughts and writings of the
mathematician about them, thereby forming a bridge between the physical and otherwise non-physical realms of the mathematical world. This view of things depends on the platonic attitude towards the
mathematical structures involved. To take a simple case, consider the statement “Joan (or her phone) says that 53 times 14 is 742 because 53 times 14 is 742, and because Joan knows arithmetic (resp.
the phone is correctly programmed for arithmetic)”. This is a correct statement, from the platonist’s point of view: The numbers 14, 53, and 742, possess their nature and properties in the
human-indepenent world of mathematical objects. Therefore the statement “53 times 14 is 742” can serve as a fact in a chain of causation in this example, without circularity. It would also be correct
to say, speaking in the vocabulary of another layer, “Joan (or her phone) says that 53 times 14 is 742 because this particular series of neural events occured in Joan’s brain (resp. electronic events
in the phone’s circuitry).” There is no contradiction in having multiple correct causal explanations at different layers.
But there is a second, less obvious, point of contact. Consider the following image:
This image exhibits a pattern of an abstract kind. With a glance, you know all about it. If I asked you to describe it, you would likely come up with something like: “It is a grid with short red
horizontal lines at the centers of cells. White lines consisting of the left and top boundaries of grid cells zigzag up and to the right across the grid, but there is also a horizontal band half way
down where all four boundaries appear as lines, and a vertical band half way across with the same property. ” All of these aspects are available in your experience of the image within a second or so
of seeing it. This is very close to an exact mathematical description of the image, in that it could easily be rendered in mathematical terms by someone with knowledge of analytic geometry and in
fully formal terms by someone who is also familiar with axiomatic (eg ZF) set theory. That is, the experience that the image induces in your mind exhibits mathematical structure, whether or not you
know how to express it in formal terms, and that structure is built immediately upon attending to the image. If this were not the case, no such description would be possible.
Now, my claim is that the pattern itself, as it exists externally to you, should not be classed as something physical, but only as having the kind of mathematical existence granted by the platonist
to any mathematical structure. Thus, the recognition of the pattern by an observer constitutes another structure on the boundary of the physical.
The reason is this: the fact that a particular mathematical pattern is exhibited by the pixels of the above image has no causal effect, except when that pattern is recognized by an observer (human or
computational). The pixels have their usual sort of causal effect, causing light to scatter in a given way, but without an observer there is no point at which the pattern plays a role. That is, if
the set of mathematical patterns exhibited by pixels is regarded as a layer above the physical, then this layer of patterns, although emergent, has no internal causation associated with it, and
therefore fails the test for the physical given earlier in this note. Thus, the observer of a pattern serves as a bridge between the physical and non-physical, just as does the mathematician.
Specifically, the pattern causes mental and linguistic events in the observer, and thus the observer plays the role of a bridge from the non-physical.
Patterns of sound as well as sight, such as the patterns found in music, are non-physical as well.
|
{"url":"https://eutelic.medium.com/at-the-boundary-of-the-physical-ecaa456f2e69?source=user_profile_page---------4-------------9eb6cdfd0ade---------------","timestamp":"2024-11-04T18:21:30Z","content_type":"text/html","content_length":"123879","record_id":"<urn:uuid:cb06d800-11f9-4847-8b01-21e411b1a6e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00008.warc.gz"}
|
The Mystery of Leading Zeros in JavaScript: A Fun Discovery
One day, our intern, Mr. Shailendra, whom we call Lala Ji, found something strange. While working on a JavaScript project, he noticed that numbers with leading zeros were acting weird. This made me
curious, so I decided to look into it more. What I found was interesting and taught me a lot about JavaScript's quirks and oddities.
Why does JavaScript interpret leading zeros differently?
A long time ago, in many programming languages like C, numbers with leading zeros were treated as octal (base 8) numbers. JavaScript followed this rule too. So, when a number starts with a zero (0),
JavaScript thinks it’s an octal number.
Surprising Examples of Leading Zeros in JavaScript
Here’s what Lala Ji found:
let number = 010;
console.log(number); // Output: 8
You might think number would be 10, but it's actually 8! That’s because 010 is seen as an octal number, which is 8 in decimal (base 10).
Understanding Different Number Bases in JavaScript
JavaScript also understands other bases:
• Hexadecimal (base 16): Starts with 0x or 0X.
• Binary (base 2): Starts with 0b or 0B.
• Octal (base 8): In modern JavaScript (ES6), use 0o or 0O.
For example:
let hex = 0x1A; // Hexadecimal 1A -> Decimal 26
let binary = 0b1010; // Binary 1010 -> Decimal 10
let octal = 0o10; // Octal 10 -> Decimal 8
How parseInt Handles Leading Zeros in JavaScript
The parseInt function turns strings into numbers. It can take a second argument called the radix, which tells it what base to use.
Without the radix, parseInt might confuse you:
let number1 = parseInt('010'); // Thinks it's octal -> 8
let number2 = parseInt('10'); // Thinks it's decimal -> 10
Always give the radix to avoid confusion:
let number1 = parseInt('010', 10); // Parse as decimal -> 10
let number2 = parseInt('010', 8); // Parse as octal -> 8
Best Practices to Avoid Confusion with Leading Zeros in JavaScript
1. Don't Use Leading Zeros: Don’t use leading zeros for decimal numbers. Use strings if you need to pad.
let number = 10; // Correct
let paddedNumber = '010'; // Use a string for padding
2. Use Clear Prefixes: Use 0o for octal, 0b for binary, and 0x for hexadecimal.
let octal = 0o10; // Clear octal
let binary = 0b1010; // Clear binary
let hex = 0x1A; // Clear hexadecimal
3. Use Strict Mode: Turn on strict mode to stop using old octal numbers.
'use strict';
let number = 010; // Error: Octal numbers not allowed in strict mode.
4. Specify Radix withparseInt: Always give the radix when using parseInt.
let decimal = parseInt('10', 10); // Clear decimal
let octal = parseInt('10', 8); // Clear octal
Thanks to Lala Ji, we learned something cool about JavaScript. Knowing these quirks helps us write better code. Always be curious and keep learning!
Happy coding!
Did you find this article valuable?
Support Nirdesh pokharel by becoming a sponsor. Any amount is appreciated!
|
{"url":"https://blog.nirdeshpokhrel.com.np/the-mystery-of-leading-zeros-in-javascript-a-fun-discovery","timestamp":"2024-11-01T18:47:12Z","content_type":"text/html","content_length":"139687","record_id":"<urn:uuid:98113716-e89b-4bb7-b3cb-3d39fca92010>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00747.warc.gz"}
|
A note on Co Galerkin methods for two-point boundary problems
Numerische Mathematik , Volume 38 - Issue 3 p. 447- 453
As is known [4]. the $C^o$ Galerkin solution of a two-point boundary problem using piecewise polynomial functions, has O($h^{2k}$ ) convergence at the knots, where $k$ is the degree of the finite
element space. Also, it can be proved [5] that at specific interior points, the Gauss-Legendre points the gradient has O($h^{k+1}$) convergence, instead of O($h^k$). In this note, it is proved that
on any segment there are $kâ 1$ interior points where the Galerkin solution is of O($h^{k+2}$), one order better than the global order of convergence. These points are the Lobatto points.
Additional Metadata
, ,
Numerische Mathematik
Bakker, M. (1982). A note on Co Galerkin methods for two-point boundary problems . Numerische Mathematik, 38(3), 447â 453.
|
{"url":"https://ir.cwi.nl/pub/11691/","timestamp":"2024-11-11T16:35:33Z","content_type":"text/html","content_length":"23251","record_id":"<urn:uuid:69ad7a77-42fc-41c8-8985-dcce29fffee6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00098.warc.gz"}
|
LQR/LQG Goal
Minimize or limit Linear-Quadratic-Gaussian (LQG) cost in response to white-noise inputs, when using Control System Tuner.
LQR/LQG Goal specifies a tuning requirement for quantifying control performance as an LQG cost. It is applicable to any control structure, not just the classical observer structure of optimal LQG
The LQG cost is given by:
z(t) is the system response to a white noise input vector w(t). The covariance of w(t) is given by:
The vector w(t) typically consists of external inputs to the system such as noise, disturbances, or command. The vector z(t) includes all the system variables that characterize performance, such as
control signals, system states, and outputs. E(x) denotes the expected value of the stochastic variable x.
The cost function J can also be written as an average over time:
$J=\underset{T\to \infty }{\mathrm{lim}}E\left(\frac{1}{T}{\int }_{0}^{T}z\left(t\right)"\text{\hspace{0.17em}}QZ\text{\hspace{0.17em}}z\left(t\right)dt\right).$
In the Tuning tab of Control System Tuner, select New Goal > LQR/LQG objective to create an LQR/LQG Goal.
Command-Line Equivalent
When tuning control systems at the command line, use TuningGoal.LQG to specify an LQR/LQG goal.
Signal Selection
Use this section of the dialog box to specify noise input locations and performance output locations. Also specify any locations at which to open loops for evaluating the tuning goal.
• Specify noise inputs (w)
Select one or more signal locations in your model as noise inputs. To constrain a SISO response, select a single-valued input signal. For example, to constrain the LQG cost for a noise input 'u'
and performance output 'y', click Add signal to list and select 'u'. To constrain the LQG cost for a MIMO response, select multiple signals or a vector-valued signal.
• Specify performance outputs (z)
Select one or more signal locations in your model as performance outputs. To constrain a SISO response, select a single-valued output signal. For example, to constrain the LQG cost for a noise
input 'u' and performance output 'y', click Add signal to list and select 'y'. To constrain the LQG cost for a MIMO response, select multiple signals or a vector-valued signal.
• Evaluate LQG objective with the following loops open
Select one or more signal locations in your model at which to open a feedback loop for the purpose of evaluating this tuning goal. The tuning goal is evaluated against the open-loop configuration
created by opening feedback loops at the locations you identify. For example, to evaluate the tuning goal with an opening at a location named 'x', click Add signal to list and select 'x'.
To highlight any selected signal in the Simulink^® model, click . To remove a signal from the input or output list, click . When you have selected multiple signals, you can reorder them using and .
For more information on how to specify signal locations for a tuning goal, see Specify Goals for Interactive Tuning.
LQG Objective
Use this section of the dialog box to specify the noise covariance and performance weights for the LQG goal.
• Performance weight Qz
Performance weights, specified as a scalar or a matrix. Use a scalar value to specify a multiple of the identity matrix. Otherwise specify a symmetric nonnegative definite matrix. Use a diagonal
matrix to independently scale or penalize the contribution of each variable in z.
The performance weights contribute to the cost function according to:
When you use the LQG goal as a hard goal, the software tries to drive the cost function J < 1. When you use it as a soft goal, the cost function J is minimized subject to any hard goals and its
value is contributed to the overall objective function. Therefore, select Qz values to properly scale the cost function so that driving it below 1 or minimizing it yields the performance you
• Noise Covariance Qw
Covariance of the white noise input vector w(t), specified as a scalar or a matrix. Use a scalar value to specify a multiple of the identity matrix. Otherwise specify a symmetric nonnegative
definite matrix with as many rows as there are entries in the vector w(t). A diagonal matrix means the entries of w(t) are uncorrelated.
The covariance of w(t is given by:
When you are tuning a control system in discrete time, the LQG goal assumes:
T[s] is the model sample time. This assumption ensures consistent results with tuning in the continuous-time domain. In this assumption, w[k] is discrete-time noise obtained by sampling
continuous white noise w(t) with covariance QW. If in your system w[k] is a truly discrete process with known covariance QWd, use the value T[s]*QWd for the QW value.
Use this section of the dialog box to specify additional characteristics of the LQG goal.
• Apply goal to
Use this option when tuning multiple models at once, such as an array of models obtained by linearizing a Simulink model at different operating points or block-parameter values. By default,
active tuning goals are enforced for all models. To enforce a tuning requirement for a subset of models in an array, select Only Models. Then, enter the array indices of the models for which the
goal is enforced. For example, suppose you want to apply the tuning goal to the second, third, and fourth models in a model array. To restrict enforcement of the requirement, enter 2:4 in the
Only Models text box.
For more information about tuning for multiple models, see Robust Tuning Approaches (Robust Control Toolbox).
When you use this requirement to tune a control system, Control System Tuner attempts to enforce zero feedthrough (D = 0) on the transfer that the requirement constrains. Zero feedthrough is imposed
because the H[2] norm, and therefore the value of the tuning goal, is infinite for continuous-time systems with nonzero feedthrough.
Control System Tuner enforces zero feedthrough by fixing to zero all tunable parameters that contribute to the feedthrough term. Control System Tuner returns an error when fixing these tunable
parameters is insufficient to enforce zero feedthrough. In such cases, you must modify the requirement or the control structure, or manually fix some tunable parameters of your system to values that
eliminate the feedthrough term.
When the constrained transfer function has several tunable blocks in series, the software’s approach of zeroing all parameters that contribute to the overall feedthrough might be conservative. In
that case, it is sufficient to zero the feedthrough term of one of the blocks. If you want to control which block has feedthrough fixed to zero, you can manually fix the feedthrough of the tuned
block of your choice.
To fix parameters of tunable blocks to specified values, see View and Change Block Parameterization in Control System Tuner.
When you tune a control system, the software converts each tuning goal into a normalized scalar value f(x). Here, x is the vector of free (tunable) parameters in the control system. The software then
adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint.
For LQR/LQG Goal, f(x) is given by the cost function J:
When you use the LQG requirement as a hard goal, the software tries to drive the cost function J < 1. When you use it as a soft goal, the cost function J is minimized subject to any hard goals and
its value is contributed to the overall objective function. Therefore, select Qz values to properly scale the cost function so that driving it below 1 or minimizing it yields the performance you
Related Topics
|
{"url":"https://es.mathworks.com/help/slcontrol/ug/tuning-goal-lqg.html","timestamp":"2024-11-04T21:31:53Z","content_type":"text/html","content_length":"82196","record_id":"<urn:uuid:aac72d8b-0787-4b57-a5e4-cdd6e15eaf32>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00239.warc.gz"}
|
What’s new
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Compulsory Part Paper 1 and Paper 2) 2024/25 (Set 2) are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Extended Part Module 1 & Module 2) 2024/25 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Compulsory Part Paper 1 and Paper 2) 2024/25 (Set 1) are now available in the Teaching Resource Centre.
to download the resources now!
Primary Mathematics:
The brand new Oxford Mathematics Practice Series website has been launched. Log in to check out our new resources of the Advanced and Intermediate series now!
Senior Secondary Oxford Mathematics for the New Century:
New Book 5A, Book M1A, Book M2A and Book M2B resources are now available in the Teacher's Resources Centre.
to see the details now!
Senior Secondary Oxford Mathematics for the New Century:
New Book 5A, Book 5B, Book M1B, Book M2B and other resources are now available in the Teacher's Resources Centre.
to see the details now!
Primary Mathematics:
Reference Table for Values Education and National Security Education in Mathematics is now available in the Teaching Resource Centre.
DSE 2024 Analysis (Statistical Data)
by Oxford University Press is now available. Check it out
Senior Secondary New Century Mathematics (Second Edition):
[Compulsory Part] 2024 Term Exam Paper S5 is now available in the Teaching Resource Centre.
Junior Secondary Mathematics:
New S1‒S3 Second Term Examination Papers are now available in the Teaching Resource Centre.
Senior Secondary Oxford Mathematics for the New Century:
Latest Book 4B, Book M1A & Book M2A resources are now available in the Teacher's Resources Centre.
to see the details now!
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Compulsory Part Paper 1 and Paper 2) 2023/24 (Set 2) are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Extended Part Module 1 & Module 2) 2023/24 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
🔥 HKDSE Mock Examination Papers 🔥 (Compulsory Part Paper 1 and Paper 2) 2023/24 (Set 1) are now available in the Teaching Resource Centre.
Junior Secondary Mathematics:
S1‒S3 First Term Examination Papers
are now available in the Teaching Resource Centre.
to download the resources now!
Primary Mathematics:
Mathematics and Values Education Worksheet is now available in the Teaching Resource Centre.
Senior Secondary Oxford Mathematics for the New Century:
More Book 4A, Book M1A, Book M2A and other resources are now available in the Teacher's Resources Centre.
to see the details now!
Senior Secondary Oxford Mathematics for the New Century:
📢 The brand new site is open now! Plenty of Book 4A and other resources are now available in the Teacher's Resources Centre.
to browse the resources now!
DSE 2023 Analysis Report
by Oxford University Press is now available. Check it out
Primary Mathematics:
HKAT Mock Exam Paper P6 (Paper 2) and Marking Scheme are now available in the Teaching Resource Centre.
Junior Secondary Mathematics:
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
[Compulsory Part] 2023 Term Exam Papers S4 and S5 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
Oxford Senior Secondary Mathematics Seminar and Senior Secondary Oxford Mathematics for the New Century Book Launch Seminar will be held on 11 March 2023 (Sat).
Primary Mathematics:
New resources for Books 6C and 6D are now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century:
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics:
HKDSE Mock Examination Papers (Extended Part Module 1 and Module 2) 2022/23 are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 6C and 6D are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2) 2022/23 are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 6A and 6B are now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century:
New resources for Book 3A are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
DSE MC Power-Up Handbook 5A, 5B & 6 Solutions are now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century /
Junior Secondary Mathematics (Second Edition):
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
[Compulsory Part] 2022 Term Exam Papers S4 and S5 are now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century /
Junior Secondary Mathematics (Second Edition):
⭐Online Summer Exercise Pack 2022⭐ is now ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
The videos for HKDSE Mock Exam Paper 2021 Solutions are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 3D and 5D are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 3C and 5C are now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century /
Junior Secondary Mathematics (Second Edition):
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
New HKDSE Mock Examination Papers (Extended Part Module 1 and Module 2) 2021 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2) 2021 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
DSE MC Power-Up Handbook 4B Solutions are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
DSE MC Power-Up Handbook 4A Solutions are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 3A, 3B, 5A and 5B are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Oxford Mathematics for the New Century /
Junior Secondary Mathematics (Second Edition):
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
M2 Hot Topics Drill (all of the topics) for Extended Part (Module 2) are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
M2 Hot Topics Drill (topics 4, 5) for Extended Part (Module 2) are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
The following new DSE exam resources for Extended Part (Module 2) are now available in the Teaching Resource Centre.
• M2 Hot Topics Drill (topics 1, 2, 3)
• Supplementary Examples & Exercise on Trigonometric Substitution
Primary Mathematics:
New resources for Books 2C, 2D, 4C & 4D are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Oxford Mathematics for the New Century:
New resources for Book 1B are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Summary of Changes for Extended Part (Module 2) is now available in the Teaching Resource Centre.
Junior Secondary Oxford Mathematics for the New Century /
Junior Secondary Mathematics (Second Edition):
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
DSE analysis webinar 2020 resources are ready for download.
Click here to watch the webinar!
Senior Secondary Mathematics (Second Edition):
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2) are now available in the Teaching Resource Centre.
Busy preparing videos, online exercises, etc. for the new semester? OUP has prepared all the necessary materials for you! Visit the special site now!
Junior Secondary Oxford Mathematics for the New Century:
New resources for Book 1A are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources for Books 2A, 2B, 4A & 4B are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Maths:
The brand new Teaching Resource Centre for Junior Secondary Oxford Mathematics for the New Century is launched. Check out our new resources through the Guest Login now!
Primary Mathematics:
New resources for Books 1C & 1D, bridging materials for the new curriculum and TSA mock paper for P1 are now available in the Teaching Resource Centre and/or Student Corner.
Primary Mathematics:
Regional workshop 3 – 怎樣把翻轉教室融入數學科 was held on 13 December (Fri). The slides of Miss Cheung Suet Fan in workshops 1 and 3 are ready for download.
Primary Mathematics:
The following brochures for 樂在牛津小學數學 are ready for download:
Primary Mathematics:
Regional workshop 2 – 設計以 STEM 為本的數學課堂 was held on 9 November (Sat). Click
to review the event. The slides of
Mr Ching Chi Cheung
Dr Cheung Kam Wah
are ready for download.
Primary Mathematics:
Regional workshop 1 – 怎樣把翻轉教室融入數學科 was held on 25 October (Fri). Click here to review the event.
Senior Secondary Mathematics (Second Edition):
New HKDSE Mock Examination Papers (Extended Part Module 1 and Module 2) are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2) are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Exercise Solutions (2019 Update), IT Activity Worksheets (2019 Update) and Inquiry & Investigation Worksheets (2019 Update) for Books 4A–6 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Amendments for Reprint with Minor Amendments Volumes 4A–6 are now available in the Teaching Resource Centre and Student Corner.
Primary Mathematics:
The brand new Primary Mathematics website has been launched.
New resources for Books 1A & 1B are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
DSE analysis workshop 2019 resources are ready for download.
Junior Secondary Mathematics (Second Edition):
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Reprint with Minor Amendments Summary of Amendments Volumes 4A–6 is now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Question Bank Update for Book 5B is now available in the Teaching Resource Centre.
Primary Mathematics:
The brand new Primary Mathematics website (sample) has been launched. Check out our new resources now by logging in with the sample account!
Junior Secondary Mathematics (Second Edition):
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
The following new resources are now available in the Teaching Resource Centre.
• HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2)
• Question Bank Update for Book 5A
Junior Secondary Mathematics (Second Edition):
QR Tutor (Flipped Classroom x Maths iTutor Interactive Workbooks 3A, 3B) is now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
The following new resources are ready for download in Teaching Resource Centre.
- S1-S3 Second Term Examination Papers
- S3 Final Examination Papers
Primary Mathematics:
New resources are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Term Exam Papers (Set 3) for Books 4B and 5B, and
Question Bank Update for Book 4B are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
All resources for Books 3A and 3B are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
Amendment list for Book 6 is now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Term Exam Papers (Set 3) for Books 4A, 5A and 6 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics:
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2) are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
KS2 Supporting Worksheet (for S2, S3) and Amendment List (for books 3A, 3B) are now available in the Teaching Resource Centre.
QR Tutor (Flipped Classroom x Maths iTutor Interactive Workbooks 2A, 2B) is now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
Question Bank Update for Book 4A is now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New resources for Books 3A and 3B are now available in the Teaching Resource Centre.
New resources for Book 3A are now available in the Student Corner.
Senior Secondary Mathematics (Second Edition):
Term Exam Paper (Set 1) for Book M1B and Term Exam Paper (Set 2) for Book M2B are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New S1‒S2 Final Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Seminar resources for the topic '3D GeoGebra 的教學應用' are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New S1‒S2 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (First Edition):
New S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Term Exam Paper (Set 1) for Book M1A and Term Exam Paper (Set 2) for Book M2A are now available in the Teaching Resource Centre.
A seminar will be held at InnoCentre, Kowloon Tong, on 1 April (Sat). (Learn more)
Primary Mathematics:
New resources are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Term Exam Papers (Set 2) for Books 4B and 5B are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
Summer Exercise S2 - S3 are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
All resources for Book 2B are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
Term Exam Papers (Set 2) for Book 6 are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
Term Exam Papers (Set 2) for Books 4A and 5A are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New S1‒S2 First Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (First Edition):
New S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
The OUP Hong Kong website has been revamped. Learn more about our news and featured products and do searches for your favourite publications here!
Primary Mathematics:
New resources are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
The following new resources are now available in the Teaching Resource Centre.
• HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2)
• Junior Secondary Foundation Topics Supplement 2
• Junior Secondary Non-foundation Topics Supplement 1
• A(1) 4-topic Algebra Power-up Practice
Junior Secondary Mathematics (Second Edition):
All resources for Book 2A are now available in the Teaching Resource Centre and Student Corner.
QR Tutor (Flipped Classroom × Maths iTutor Interactive Workbooks 1A, 1B) is now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
New resources for Books 2A and 2B are now available in the Teaching Resource Centre.
New resources for Book 2A are now available in the Student Corner.
Senior Secondary Mathematics (Second Edition):
All resources for Book 6 are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
Summer Exercise S1 - S2 are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
QR Tutor (Flipped Classroom × Maths iTutor Interactive Workbook) is now available in the Teaching Resource Centre and Student Corner.
KS2 Supporting Worksheet (related to chapters in S1) and New S1 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (First Edition):
New S2‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
Senior Secondary Mathematics (Second Edition):
Amendment lists for Books 5B, M2A and M2B are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
New resources for Book 6 are now available in the Teaching Resource Centre and Student Corner.
Primary Mathematics:
New resources are now available in the Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
New resources for Book 1B are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
New resources for Book M2B are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
New S1 First Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (First Edition):
New S2‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics:
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2) are now available in the Teaching Resource Centre.
Primary Mathematics:
New resources are now available in the Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
New resources for Book 5B are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics (Second Edition):
New resources for Book 1A are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
Bridging Programme for S3–S4 is now available in the Teaching Resource Centre.
Junior Secondary Mathematics:
New S1‒S3 Second Term Examination Papers are ready for download in Teaching Resource Centre.
Junior Secondary Mathematics (Second Edition):
The brand new mobile app ‘Junior Secondary Maths iTutor’ is launched. Check out Apple’s App Store and download now!
Junior Secondary Mathematics (Second Edition):
The brand new Junior Secondary New Century Mathematics (Second Edition) website is launched. Check out our new resources here!
Senior Secondary Mathematics (Second Edition):
New resources for Book 5A are now available in the Teaching Resource Centre and Student Corner.
Junior Secondary Mathematics:
New S1‒S3 First Term Examination Papers are ready for download in Teaching Resource Centre.
Senior Secondary Mathematics (Second Edition):
All resources for Book 4B are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
Corrigenda for Book M2A (Chinese edition) is available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
All resources for Book 4B (except Teaching Apps) are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics (Second Edition):
All resources for Book M2A are now available in the Teaching Resource Centre and Student Corner.
Senior Secondary Mathematics:
New HKDSE Mock Examination Papers (Compulsory Part Paper 1 and Paper 2, Extended Part Module 1 and Module 2) are now available in the Teaching Resource Centre.
The Online Assessment is available. Please download our brand new Oxford Learning App to access it.
Senior Secondary Mathematics (Second Edition):
All resources for Book 4A are now available in the Teaching Resource Centre and Student Corner.
The brand new Senior Secondary New Century Mathematics (Second Edition) website is launched. Check out our new resources now!
|
{"url":"https://trc.oupchina.com.hk/maths/eng/","timestamp":"2024-11-13T18:08:59Z","content_type":"text/html","content_length":"232299","record_id":"<urn:uuid:0a434081-ae0f-403f-ad8a-155b12c09dcf>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00128.warc.gz"}
|
Control of the metabolic flux in the system with high enzyme concentrations and moiety-conserved cycles
Control of the metabolic flux in the system with high enzyme concentrations and moiety-conserved cycles: the sum of the flux control coefficients can drop significantly below unity
Kholodenko B.N., Lyubarev A.E., Kurganov B.I.
Eur. J. Biochem., 1992, v. 210, p. 147-153
PDF file (608 kB)
In a number of metabolic pathways enzyme concentrations are comparable to those of substrates. Recently it has been shown that many statements of the 'classical' metabolic control theory are violated
if such a system contains a moiety-conserved cycle. For arbitrary pathways we have found: (a) the equation connecting coefficients C[Ei]^J (obtained by varying the E[i] concentration) and C[vi]^J
(obtained by varying the kicat), and (b) modified summation equations. The sum of the enzyme control coefficients (equal to unity under the 'classical' theory) appears always to be below unity in the
systems considered. The relationships revealed were illustrated by a numerical example where the sum of coefficients C[Ei]^J reached negative values. A method for experimental measurements of the
above coefficients is proposed.
Титульный лист | Физико-химическая биология
|
{"url":"https://lyubarev.narod.ru/science/ejb92.htm","timestamp":"2024-11-02T18:50:18Z","content_type":"text/html","content_length":"6486","record_id":"<urn:uuid:36414287-b62e-4481-857d-a698a3311f41>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00422.warc.gz"}
|
Advancement on the study of growth analysis of differential polynomial and differential monomial in the light of slowly increasing functions
Advancement on the study of growth analysis of differential polynomial and differential monomial in the light of slowly increasing functions
entire function, meromorphic function, relative $_{p}L^{\ast }$-order, relative $_{p}L^{\ast }$- type, relative $_{p}L^{\ast }$-weak type, growth, differential monomial, differential polynomial,
slowly increasing function
Published online: 2018-07-03
Study of the growth analysis of entire or meromorphic functions has generally been done through their Nevanlinna's characteristic function in comparison with those of exponential function. But if one
is interested to compare the growth rates of any entire or meromorphic function with respect to another, the concepts of relative growth indicators will come. The field of study in this area may be
more significant through the intensive applications of the theories of slowly increasing functions which actually means that $L(ar)\sim L(r)$ as $ r\rightarrow \infty $ for every positive constant
$a$, i.e. $\underset{ r\rightarrow \infty }{\lim }\frac{L\left( ar\right) }{L\left( r\right) }=1$, where $L\equiv L\left( r\right) $ is a positive continuous function increasing slowly. Actually in
the present paper, we establish some results depending on the comparative growth properties of composite entire and meromorphic functions using the idea of relative $_{p}L^{\ast }$-order, relative $_
{p}L^{\ast }$- type, relative $_{p}L^{\ast }$-weak type and differential monomials, differential polynomials generated by one of the factors which extend some earlier results where $_{p}L^{\ast }$ is
nothing but a weaker assumption of $L.$
How to Cite
Biswas, T. Advancement on the Study of Growth Analysis of Differential Polynomial and Differential Monomial in the Light of Slowly Increasing Functions. Carpathian Math. Publ. 2018, 10, 31-57.
|
{"url":"https://journals.pnu.edu.ua/index.php/cmp/article/view/1468","timestamp":"2024-11-04T20:17:18Z","content_type":"text/html","content_length":"34435","record_id":"<urn:uuid:99d500ea-2d5c-48fa-a801-9e7f87c388c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00404.warc.gz"}
|
Navigator Suite - Catalog - View Catalog
Paseka School of Business - Survey of Differential Calculus with Algebra
Course Code
Title Survey of Differential Calculus with Algebra
Prerequisite ACT Math = 23 or SAT 560 or above or Accuplacer Intermediate Algebra = 60 or Accuplacer AAF = 255 or MATH 099 Intermediate Algebra with grade C- or higher
Lasc Area Goal 4
Course Course Outline
Description Review of topics in college algebra with emphasis on solving systems of equations with unique solutions, under determined and overdetermined systems. Introduction to matrices,
multiplication of matrices and inverse of a square matrix with emphasis on systems of equations and applications. Derivatives, applications of differentiation and optimization. Not open
to mathematics majors or minors. Must have successfully completed MDEV 099 or acceptable placement score. MnTC Goal 4.
|
{"url":"https://navigator.mnstate.edu/Catalog/ViewCatalog.aspx?pageid=viewcatalog&topicgroupid=5178&entitytype=CID&entitycode=MATH+227","timestamp":"2024-11-09T02:55:14Z","content_type":"application/xhtml+xml","content_length":"24776","record_id":"<urn:uuid:22d37f33-9d56-480d-9fe0-f212b312d008>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00550.warc.gz"}
|
How do you graph g(x) = -(3/2)^x? | Socratic
How do you graph #g(x) = -(3/2)^x#?
1 Answer
Actual graph plot shown. Explanation following it!
It is a good thing if you are able to interpret the expression/equation in such a way that you build up a mental picture of what the numbers are actually doing. The structure of an equation is
modelling behaviour in some way. First the graph:
Clarifying a point: $g \left(x\right)$ is a short hand way of writing that some form of mathematical operation is applied to the variable of $x$.
IN this case the process is given the name of "g".
Hence $g \text{ of } x$.
To be able to graph this we have to include the consequence of that operation which is a value. This is the dependant variable and when we "equate" this to $g \left(x\right)$ we then have a
"consequence" and a "cause" that we can plot on a graph. This in effect is $y = g \left(x\right) = - {\left(\frac{3}{2}\right)}^{x}$.
Look at the overall structure of the equation. It is $- \text{something}$. That is, it is always negative. So y is always negative so the plot never occurs above y=0.
The absolute value of the number that the operation is carried out on is a is "bigger" than 1 ( absolute value means that it has been converted to positive). So if multiplied by itself repeatedly it
gets bigger and bigger.
But this number will always be negative so its value as $x$ increases becomes more and more less than 0.
Note that less than 0 does not necessarily mean smaller.
By the way; when $0 < x < 1$ then $0 > y < \text{absolute} \left(- \frac{3}{2}\right)$
or another way of saying it: $0 > y > \left(- \frac{3}{2}\right)$
when $x = 0$ then ${\left(\frac{3}{2}\right)}^{0} = 1$ but we have $- 1 \times {\left(\frac{3}{2}\right)}^{0} = - 1$
When $x < 0$ then we have $- \frac{1}{\frac{3}{2}} ^ x$
in which case as ${\left(\frac{3}{2}\right)}^{x}$ gets bigger and bigger, whilst still negative. Remember that -25 is bigger than -1. It is just negatively bigger.
Impact of this question
2938 views around the world
|
{"url":"https://socratic.org/questions/how-do-you-graph-g-x-3-2-x#176182","timestamp":"2024-11-06T02:40:40Z","content_type":"text/html","content_length":"38052","record_id":"<urn:uuid:564671db-5042-4547-b361-089bf83a7067>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00793.warc.gz"}
|
Integrating Mathematical Modeling with High-Throughput Imaging Explains How Polyploid Populations Behave in Nutrient-Sparse EnvironmentsIntegrating Mathematical Modeling with High-Throughput Imaging Explains How Polyploid Populations Behave in Nutrient-Sparse Environments
Breast cancer progresses in a multistep process from primary tumor growth and stroma invasion to metastasis. Nutrient-limiting environments promote chemotaxis with aggressive morphologies
characteristic of invasion. It is unknown how coexisting cells differ in their response to nutrient limitations and how this impacts invasion of the metapopulation as a whole. In this study, we
integrate mathematical modeling with microenvironmental perturbation data to investigate invasion in nutrient-limiting environments inhabited by one or two cancer cell subpopulations. Subpopulations
were defined by their energy efficiency and chemotactic ability. Invasion distance traveled by a homogeneous population was estimated. For heterogeneous populations, results suggest that an imbalance
between nutrient efficacy and chemotactic superiority accelerates invasion. Such imbalance will spatially segregate the two populations and only one type will dominate at the invasion front. Only if
these two phenotypes are balanced, the two subpopulations compete for the same space, which decelerates invasion. We investigate ploidy as a candidate biomarker of this phenotypic heterogeneity and
discuss its potential to inform the dose of mTOR inhibitors (mTOR-I) that can inhibit chemotaxis just enough to facilitate such competition.
This study identifies the double-edged sword of high ploidy as a prerequisite to personalize combination therapies with cytotoxic drugs and inhibitors of signal transduction pathways such as mTOR-Is.
Invasion and infiltration are hallmarks of advanced cancers, including breast cancer, and accumulating evidence suggests that invasive subclones arise early during tumor evolution (1).
Infiltrating and invasive phenotypes are often observed among high-ploidy cells. Converging evidence from different cancer types, including colorectal, breast, lung, and brain cancers, suggests a
strong enrichment of high-ploidy cells among metastatic lesions as compared with the primary tumor (2, 3). Even in normal development, trophoblast giant cells, the first cell type to terminally
differentiate during embryogenesis, are responsible for invading the placenta and these cells often have hundreds of copies of the genome (4). Coexistence of cancer cells at opposite extremes of the
ploidy spectrum occurs frequently in cancer and is often caused by whole-genome doubling (WGD). Similar to infiltration, the timing of WGD is early in tumor progression across several cancer types (
5, 6), including breast cancer. Tetraploid cells resulting from WGD often lose regions of the genome, giving rise to poly-aneuploid cancer cells (PACC). Multiple studies have described a minority
population of PACCs with an unusual resilience to stress (7–9). A very recent investigation of evolutionary selection pressures for WGD suggests that it mitigates the accumulation of deleterious
somatic alterations (10). However, it is not clear what cost cells with a duplicated genome pay for this robustness.
To address this question, we developed a mathematical model of high- and low-ploidy clones under various energetic contingencies. We calibrated the model to recapitulate doubling times and spatial
growth patterns measured for the HCC1954 ductal breast carcinoma cell line via MEMA profiling (11). This includes exposure of HCC1954 cells to hepatocyte growth factor (HGF) in combination with 48
extracellular matrices (ECM), followed by multi-color imaging (12). Sequencing (13) and karyotyping studies (14, 15) of cancer cell lines have shown that technically, sequences obtained from a cancer
cell line encode a metagenome (16), because they represent the aggregate genomes of all clones that coexist within the cell line. To capture this heterogeneity, we modeled how high- and low-ploidy
clones coevolve and how that affects the invasiveness of the metapopulation. Our results show that long-term coexistence of low- and high-ploidy clones occurs when sensitivity of the latter to energy
scarcity is well-correlated to their chemotactic ability to populate new terrain. Higher energy uniformity throughout population expansion steers selection in favor of the low-ploidy clone, by
minimizing the fitness gain the high-ploidy clone gets from its chemotactic superiority. Better understanding of how these two phenotypes coevolve is necessary to develop therapeutic strategies that
suppress slowly proliferating, invasive cells before cytotoxic therapy favors them.
Materials and Methods
We first introduce the conceptual framework that led to the model, formulate the model equations, and then we derive analytic and numerical solutions. Finally, we describe drug sensitivity and RNA
sequencing (RNA-seq) data analysis of cell lines with different ploidies.
Overall model design
The use of partial differential equations (PDE) over a stochastic-based approach, such as agent-based modeling, permits us to make predictions based on analytic results derived from the subsequent
PDEs and an increase in computational efficiency.
We modeled growth dynamics in polyploid populations of various subpopulation compositions. Appealing to a continuity description and assuming a continuum approximation of the cellular and energy
concentration is valid, we derived a system of coupled PDEs. Each compartment in the PDE describes the spatio-temporal dynamics of the quantity of interest (e.g., energy or cellular dynamics). At the
core of our model lies the assumption that chemotactic response to an energy gradient is a function of the cell's energetic needs. This trade-off implies that heterogeneous populations will segregate
spatially, with higher energy-demanding cells leading the front of tumor growth and invasion. In contrast, for an energy-rich environment, we expect the cells to grow in a similar way as they will
have no need to search for places of higher energy density.
We modeled competition for energy in a heterogeneous population, consisting of goer and grower subpopulations, to predict their behavior during plentiful and energy sparse conditions. We assumed both
goer and grower have the same random cell motility coefficient Γ, the same chemotactic coefficient χ, and maximal growth rate λ. Sensitivity to the available energy was modeled via a Michaelis–Menten
type equation with coefficients |$\Phi _i}$| that determine the amount of energy that population, i, needs for a half maximal growth rate. Chemotactic motion is asymmetrically sensitive to the amount
of energy available. This is accounted for by |$Ξ _i}$|. The goers (U) are more motile and require more energy compared with the growers (V). This manifests itself mathematically via the parameter
relations |$\Phi _U} \gt{\Phi _V}$| and |$Ξ _U} \lt {Ξ _V}$|.
Quantitative estimates of how a cell's growth rate and motility depend on energy availability have been described previously (17–19). Energetic resources come in various forms, and the identities of
the limiting resources that ultimately drive a cell's fate decision vary in space and time. We used MEMA profiling to investigate what likely is only a narrow range of that variability, 48
HGF-exposed ECMs (12). HGF stimulates both growth and migration of epithelial and endothelial cells in a dose-dependent manner, whereby maximal growth-stimulating effects have been reported at
concentrations twice as high as concentrations that maximize migration (20). In-line with these reports, our model demonstrates a shift from proliferation to migration as resources get depleted.
Mathematical models of a dichotomy between proliferation and migration are numerous (21–23), but whether the two phenotypes are indeed mutually exclusive remains controversial (24). Our efforts to
use mathematical modeling to inform what cost high-ploidy cells (goers) pay for their robustness build upon these prior works. We extend it by accounting for differences in the rate at which cells
consume energy (δ) and differences between the rate at which energy diffuses in media (coefficient Γ[E]). For mid-range energy diffusion coefficients, our model describes directed cell motility in
response to a gradient of a soluble attractant, that is, chemotaxis. In contrast, small values of Γ[E] approximate cell motility toward insoluble attractants, that is, haptotaxis. As such, the chosen
value for Γ[E] determines where along the continuum between haptotaxis- and chemotaxis-directed cell movement resides. A special case applies when Γ[E] is very large relative to cell movement, in
which case neither chemotaxis nor haptotaxis occurs. All these energetic contingencies determine whether phenotypic differences between goers and growers manifest as such, and explain why
nonproliferative arrested cells can have the same motility as cycling cells (24).
Quick guide to equations
Our model assumes that the energy diffusion coefficient, Γ[E], depends on the type of media or surface upon which the cells grow. We also suppose that energy is consumed in proportion to the number
of cells present. For cell motility, we assume it is driven both by random cell motion and chemotaxis. This leads to our general coupled system:
The system (Eqs. A–C) is defined on a dish of radius R, subject to no-flux boundary conditions. We assume the goers (U) and growers (V) are initially concentrated at the center with radius |$\rho _0}
\le R$| and initial concentrations |$U_0}\ {\rm{and}}\ {V_0}$|. Furthermore, we assume that the energy density is uniformly distributed on the plate with initial value E[0]. All parameters, except
otherwise stated, are independent of any of the state variables (|$\tilde{E},U,{\rm{and}}\ V$|). The energy is consumed at rate δ. Both cells can divide at the maximal rate λ, but are restricted by
the energy density |$\tilde{E}$|. The cells can locally grow to a local maximal density given by K. This parameter is often cell line dependent and is related to contact inhibition and a cell's
ability to grow on top of other cells.
We now convert our system to dimensionless form that was used for all subsequent simulations and analysis via appropriate rescaling (Supplementary Materials and Methods),
Hereby, rescaling simplified the system from 13 (10 parameters and three initial conditions) to nine (seven parameters and two initial conditions), with dimensionless variables: |$\gamma _E} = \cr {\
Gamma _E}/\chi ,\;a = \delta K/\lambda ,\;b = \Gamma /\chi$|, |$\phi = \Phi /{E_0}$|, and |$\xi = Ξ /{E_0}$|.
Analytic estimates of infiltration
The desire for cells to move is inherently tied to the availability of nutrients and space. To this end, we define |$Ψ ( \tau ): = [ {\rho ( \tau ) - {\rho _0}} ]/{\rho _0}$|, where |$\rho ( 0 ) =
{\rho _0}$|, the initial radius of cell seeding density. Ψ can be thought of as a nondimensional measure of infiltration attained after time τ. This dimensionless measure has the added benefit of
being scale independent. An inherent difficulty with random cell motility and calculating infiltration is that the system always reaches the boundary of the dish in finite time. Instead, we defined
the maximum degree of infiltration to be given by the time needed for the total energy to be below a threshold, |$\varepsilon \ll 1$|.
In general, the maximum degree of infiltration is difficult to predict analytically, so we only considered the single subpopulation case when obtaining our analytic estimates. We also made use of the
simplification that most energy type molecules (e.g., glucose) have a diffusion coefficient that is very large, relative to cell movement. This allowed us to write a reduced model which has energy
homogeneous in space (see Supplementary Eqs. S3a–S3b in Supplementary Materials and Methods). We estimated infiltration analytically through two different approaches. First, we extracted an ordinary
differential equation (ODE) system that couples nutrient consumption to the wave front location. Second, we derived estimates of the infiltration achieved by using the cell concentration after it has
become uniform.
No chemotaxis infiltration estimate
We can derive an estimate for infiltration in the absence of chemotaxis by appealing to Supplementary Eqs. S3a–S3b. These logistic growth reaction diffusion models often exhibit complex dynamics. One
such example is that of a traveling wave solution, where one state, typically the stable state, travels (infiltrates) through the domain. The canonical example of this phenomenon is the Fisher–KPP.
In contrast to the Fisher–KPP and other traveling wave problems typically studied, our model has a decaying growth rate and so the magnitude of the nonlinearity that caused the traveling wave tends
to zero. Therefore, in the classic sense, our system does not admit a traveling wave. We here extend the theory by assuming a separation of time scales between consumption of energy (e.g., decay of
energy-dependent growth rate) and the speed of the traveling wave.
To begin, we made the assumptions that the wave speed is a slow function of r and τ. The solution obtained will verify that these assumptions are valid for our system. Our ansatz takes the form |$u(
{r,\tau } ) = U( {r - \eta \tau } ) = U( z )$|. Note that in spatial equilibrium, u = 1 is stable and u = 0 is an unstable steady state. If the unstable state is what governs the wave speed, then
the wave is said to be “pulled,” otherwise it is “pushed” (25, 26). Following Witelski and colleagues (27), the resulting analysis yielded a coupled system of ODEs that govern the speed of the front
(Supplementary Materials and Methods):
where we have the dimension n of the system (e.g., |$n = 2,\ 3$|) entering in both the consumption of energy and influencing the second term of the wave front location ρ. We saw that our assumptions
on the behavior of the traveling wave were verified, because ρ is assumed much larger than 0 and |$0 \ll \rho \ll R$|. This shows that, E is a slowly varying function of time and |$\eta = d\rho /d\
tau$| is a slowly varying function of time and its current distance from the center. In other words, a traveling wave will only form when the initial seeding radius is relatively large (|$\rho _0} \
sim 1$| was sufficient in most simulations).
Estimating the degree of infiltration from equilibration
The previous section yields a system that can be integrated to track the evolution of the cell front over time. However, we may be more interested in how far it will ultimately travel before energy
exhaustion and not the speed at which it gets there. An interesting alternative to tracking the wave over time is to only assume it travels as a wave, but only record the density after the system has
reached uniformity. This is possible if the death rate (which has been neglected) is much smaller than the time it would take the cells to spread uniformly. If this is the case, we can bound the
degree of infiltration from only knowing the uniform value |$\bar{u}$| at the end of the experiment (see Supplementary Materials and Methods for details):
where Λ is the transition width, that is, the length scale on which cell concentration goes from u = 1 to u = 0, and |$\bar{u}$| is the concentration at the end of the experiment (at equilibration).
Solving Eq. I gives us the estimated infiltration as:
Because we assumed that the transition width is unknown, we can bound ψ [or ρ(T)] by considering the lower and upper bounds |$\Lambda = 0,\ 1$|, respectively.
Numerical estimates of infiltration
When directed cell movement is not negligible, analytic approximations are more difficult to obtain, and numerical simulations are preferred to estimate the degree of infiltration. For simulations
where we measured infiltration, we took the 1-norm |$\parallel\!\! E{\parallel _1} = \mathop \sum_i | {{E_i}} | \,\lt\,\varepsilon = 10^{- 4}$| as the threshold defining minimum energy requirements.
We calibrated the model to recapitulate doubling times and spatial growth patterns measured for the HCC1954 ductal breast carcinoma cell line via MEMA profiling. The dataset included exposure of
HCC1954 cells to HGF in combination with 48 ECMs in a DMSO environment (i.e., no drug was added to the media). Between 13 and 30 replicates of each ECM were printed on a rectangular MEMA array as
circular spots (Supplementary Fig. S1A), adding up to a total of 692 spots (12).
MEMA data analysis
An average of 31 cells (90% confidence interval, 23–41) were seeded on each 350-μm spot and grown for 3 days. The median number of cells imaged at day 3 was 121, which falls within the range expected
from the estimated seeding density and a doubling time reported for this cell line (72–128 cells). Confluence at seeding was calculated from the ratio between the cumulative area of cells and the
area of the spot (see Supplementary Fig. S1B and S1C).
Quantification of segmented, multi-color imaging data obtained for each spot 3 days after seeding was downloaded from Synapse (synID: syn9612057; plateID: LI8C00243; and well: B03). Cells tended to
grow in a symmetric, toroidal shape (Supplementary Fig. S1D), albeit considerable variability was observed across the ECMs. We binned cells detected within a given spot according to their distance to
the center of the spot and calculated the confluence of each bin (Supplementary Materials and Methods). This was then compared with the confluence obtained from the simulations as described below
(Supplementary Data S1–S3).
Simulation environment
Simulations were run to recapitulate logistics and initial growth conditions on the MEMA array (Supplementary Fig. S1A), the spatial domain was circular with radius |$R = 1\!,\!750$| μm and the
temporal domain was 3 days. Cells were seeded uniformly at the center of this domain along a radius |$\rho _0} = 175$| μm at 36% confluence. To recapitulate the configuration of the MEMA profiling
experiment, cells leaving the ρ[0] domain can no longer adhere to the ECM and die. This was implemented by introducing the carrying capacity, K(x), rapidly approaching zero when x > ρ[0]. This setup
can lead to energy attracting cells to the periphery of a MEMA spot and beyond. We ran 161,000 simulations (Supplementary Data S1) at variable energy consumption rates, chemotactic/haptotactic
coefficients, energetic sensitivities, and diffusion rates of the growth-limiting resource (i.e., ECM bound HGF; Table 1; Supplementary Table S1; Supplementary Data S2). Ninety-two percent of these
simulations were completed successfully. For each simulation/ECM pair, we compared spatial distributions of in silico and in vitro confluence (Supplementary Data S3) using the Wasserstein metric (28
Table 1.
Experimentally measured parameters . Symbol . Homogeneous cells on any ECM . Heterogeneous cells . Unit .
Maximal growth rate λ 0.56 0.56 Per day
Goer's seeding density u[0] 36.0 3.8 % confluence
Grower's seeding density v[0] 1.3 % confluence
Simulation time t 3 127 Days
Radius of domain |$\hat{\bf R}$| 1,750 1,750 μm
Seeding radius ρ[0] 175 175 μm
Growth beyond ρ[0] No Yes
Experimentally measured parameters . Symbol . Homogeneous cells on any ECM . Heterogeneous cells . Unit .
Maximal growth rate λ 0.56 0.56 Per day
Goer's seeding density u[0] 36.0 3.8 % confluence
Grower's seeding density v[0] 1.3 % confluence
Simulation time t 3 127 Days
Radius of domain |$\hat{\bf R}$| 1,750 1,750 μm
Seeding radius ρ[0] 175 175 μm
Growth beyond ρ[0] No Yes
Inferred parameters . Symbol . Homogeneous cells on GAP43 . Heterogeneous cells . Unit .
Energy diffusion coefficient on ECM Γ[E] 9,500 8,982 μm^2/day
Energy consumption rate δ 57 23 Per cells/day
Energy needed for half-maximal growth Φ[u] (0.01, 3) 0.310
Φ[v] (0.05, 0.16)
Chemotactic coefficient χ 5,500 1,919 μm^2/day
Energy deficit inducing chemotaxis ξ[u] (2, 300) (1, 8)
ξ[v] 140
Inferred parameters . Symbol . Homogeneous cells on GAP43 . Heterogeneous cells . Unit .
Energy diffusion coefficient on ECM Γ[E] 9,500 8,982 μm^2/day
Energy consumption rate δ 57 23 Per cells/day
Energy needed for half-maximal growth Φ[u] (0.01, 3) 0.310
Φ[v] (0.05, 0.16)
Chemotactic coefficient χ 5,500 1,919 μm^2/day
Energy deficit inducing chemotaxis ξ[u] (2, 300) (1, 8)
ξ[v] 140
Note: Parentheses indicate ranges with equally good fits. Corresponding nondimensional parameters are shown in Supplementary Table S1. Net chemotaxis of cells is given by the interaction of the
chemotactic coefficient (χ) and energy deficit inducing chemotaxis (ξ[u]), that is, effective cell chemotaxis is approximately χ/ξ[u]. The larger ξ[u], the later the cells sense an energy deficit,
that is, the larger the energy gradient must be in order for them to accelerate their movement in response to it.
The model was implemented in C++ (standard C++11). The armadillo package (ARMA version: 9.860.1; ref. 29) was used for simulation of the PDEs. Simulations were run on a Intel Core i7 MacBook Pro, 2.6
GHz, 32 GB RAM. The source code is available at the GitHub repository for the Integrated Mathematical Oncology department: GoOrGrow (https://github.com/MathOnco/GoOrGrow_PloidyEnergy). An R Markdown
script was included, enabling replication of comparisons between experimental data and model simulations (Supplementary Data S1–S3).
Ploidy as biomarker of phenotypic divergence
We identified 44 breast cancer cell lines of known ploidy (30) and with available RNA-seq data in Cancer Cell Line Encyclopedia (CCLE; ref. 31) and analyzed their drug sensitivity and expression
profiles as follows.
Drug sensitivity analysis
We used growth rate inhibition (GR) metrics as proxies of differences in drug sensitivities between cell lines. Unlike traditional drug sensitivity metrics, like the IC[50], GR curves account for
unequal division rates, arising from biological variation or variable culture conditions, a major confounding factor of drug response (32). Previously calculated GR curves and metrics were available
for 41 of 44 breast cancer cell lines. A total of 46 drugs had been profiled on at least 80% of these cell lines and their area over the curve (GR[AOC]) drug–response metric (33) was downloaded from
GRbrowser (http://www.grcalculator.org/grtutorial/Home.html). For each drug, we calculated the z-score of GR[AOC] across cell lines to compare drugs administered at different dose ranges. Of these 46
drugs, 39 could be broadly classified into two categories as either cytotoxic (25 drugs) or inhibitors of signaling pathways (14 drugs; Supplementary Table S2). We then evaluated a cell line's ploidy
as a predictor of its GR[AOC] value using a linear regression model. Because molecular subtype of breast cancer cell lines is known to influence drug sensitivity, we performed a multivariate
analysis, including the molecular subtype as well as an interaction term between ploidy and drug category into the model.
RNA-seq analysis
The molecular subtype classification of all cell lines was available from prior studies (34–37). Of these 44 cases, four were suspension cell lines and were excluded from further analysis. Of the
remaining 40 cell lines, 20 originated from primary breast cancer tumors and were the focus of our analysis. Gene expression data were downloaded from CCLE. We used gene set variation analysis to
model variation in pathway activity across cell lines (38). Pathways for which less than 10 gene members were expressed in a given cell lines were not quantified. The gene membership of 1,417
pathways was downloaded from the REACTOME database (ref. 39; v63) for this purpose.
High-ploidy breast cancer cell lines have increased metabolic activity and cell motility
To better understand the phenotypic profile of high-ploidy cells, we compared the ploidy of 41 breast cancer cell lines with their response to 46 drugs. For a drug–response metric, we used the
integrated effect of the drug across a range of concentrations estimated from the GR[AOC] (32, 33). We observed that cytotoxic drugs and drugs inhibiting signal transduction pathways were at opposite
ends of the spectrum (Fig. 1A). Namely, ploidy was negatively correlated with the GR[AOC] for several cytotoxic drugs and positively correlated with the GR[AOC] of various mTOR inhibitors, suggesting
high-ploidy breast cancer cell lines tend to be resistant to DNA-damaging agents, while sensitive to drugs targeting nutrient sensing and motility.
We built a multivariate regression model of drug sensitivities to test the hypothesis that the relationship between ploidy and GR[AOC] was different for cytotoxic drugs than for inhibitors of cell
signaling pathways. Molecular subtype alone (Fig. 1B), could explain 0.4% of the variability in GR[AOC] z-scores across cell lines (adjusted R^2 = 0.0044; P = 0.026). Including ploidy into the model
did not improve its predictive accuracy (adjusted R^2 = 0.0037; P = 0.058). However, an interaction term between ploidy and drug category (cytotoxic: 27 drugs vs. signaling: 16 drugs) increased
accuracy to explain 2.6% of variability in drug sensitivity across cell lines (adjusted R^2 = 0.026; P < 1e-5; Fig. 1C). The same improvement from an interaction term between ploidy and drug category
was observed in an independent dataset of half-maximal inhibitory concentration (IC[50]) values of 34 cytotoxic drugs and 51 signaling inhibitors obtained from the Genomics of Drug Sensitivity in
Cancer database (Supplementary Fig. S2; ref. 40).
We then focused on a subset of the aforementioned 41 cell lines, namely those that had been established from primary breast cancer tumors as adherent cells (20 cell lines; Fig. 1D), and we quantified
their pathway activity (see Materials and Methods). A total of 27 pathways were correlated to ploidy at a significant P value (|$| {{\rm{Pearson}}\,r} | \ge 0.44$|; P ≤ 0.05; Supplementary Table
S3). The strongest correlations were observed for metabolic pathways such as hyaluronan metabolism and metabolism of vitamins (Fig. 1E and F). Hyaluronic acid is a main component of the ECM and its
synthesis has been shown to associate with cell migration (41, 42).
These results support a model that connects high ploidy with both the chemotactic ability and metabolic energy deficit of a cell.
Infiltration of homogeneous populations
Model design was guided by the goal to describe growth dynamics along two axes: from random to directed cell motility and from homogeneous to heterogeneous cell compositions (Materials and Methods,
Eqs. D–F). While the last section will step into the second axis, the following two subsections distinguish scenarios along the first axis: (i) “homogeneous nutrient environments” are environments in
which random cell motility dominates throughout population growth and (ii) “heterogeneous nutrient environments” imply the formation of gradients, which cause cells to move in a directed fashion, as
is the case during cellular growth on an ECM.
Homogeneous nutrient environments
When the diffusion of the nutrient occurs at a much faster time scale than the actions taken by cells, we can assume that at the time scale of cells, the nutrient is essentially uniform in space.
This simplification allows us to neglect chemotactic/haptotactic motion and consider only random cell motility as the driving force that spreads the cell density throughout the dish.
We obtained analytic estimates for the degree of infiltration in a homogeneous environment that lay the groundwork for new predictions. To arrive at Eq. G–Eq. H, we employed the approximations that
diffusion of energy molecules (e.g., glucose) is fast relative to cell movement and that the cells' movement through the dish can be approximated by a traveling wave (43). These assumptions were
verified by comparing the front estimates with results from the full numerical model (Eqs. D–F; Fig. 2A and B).
An interesting alternative to tracking the wave over time is to assume it travels as a wave, but only record the density after the system has reached uniformity. If the death rate is much smaller
than the time needed for the cells to spread uniformly, we can bound the degree of infiltration that occurred by only knowing the uniform density of cells (equation; ref. 4; Fig. 2C).
These analytic solutions point to scaling relationships for the speed of the moving front. For highly efficient energy-using cell lines (|$\phi \ll 1$|; Fig. 2D), the front will evolve at a speed
nearly independent of energy. In contrast, for large |$\phi \gg 1$|, the speed of the front falls off as |$1/\sqrt \phi$|. These predictions of the behavior of infiltration on parameters can be
investigated experimentally.
Heterogeneous nutrient environments
Assumptions made in the prior section apply to standard cell cultures of adhesive cells in a typical cell culture dish, where energetic resources diffuse so fast that gradients do not form. These
assumptions break down during cellular growth on an ECM. Binding to the ECM can cause soluble factors (like HGF) to act and signal as solid phase ligands (44, 45). Proteolytic degradation of these
ECMs then creates haptotactic gradients. Figure 1 includes HCC1954, a near-tetraploid breast cancer cell line, whose growth on various ECMs has been measured via MEMA profiling. We analyzed HCC1954
and looked to determine whether our mathematical model can explain its spatial growth patterns. MEMA profiling resulted in considerable variability of growth patterns across different ECMs (Fig. 3A
and B).
We projected two-dimensional (2D) spatial distributions measured on the MEMA array onto one dimension (1D; Fig. 3C), rendering them comparable with those obtained from our simulations (Fig. 3D and E
). For each simulation/ECM pair, we calculated the distance between in silico and in vitro spatial cell distributions using the Wasserstein metric (Fig. 3C and F) and ranked simulations by their
minimum distance across ECMs. The top 2.3% simulation parameters were then stratified by the ECM whose spatial pattern they best resembled and compared with uniform prior parameter distributions (
Fig. 3D and E). Seventy-five percent of the ECMs accounted for only 1.55% of the top simulations. The tendency of cells on these ECMs to grow in a toroidal shape was strong, suggesting it may be the
consequence of nonuniform printing of ECMs onto the array. We concluded that, our model cannot explain the growth patterns on those ECMs well and focused our attention on the remaining 12 ECMs,
represented by 3,429 simulations. We refer to the parameters of these simulations as inferred parameter space. Principal component analysis, uniform manifold approximation and projection (UMAP; ref.
46), and density clustering (47) of the inferred parameter space revealed three clusters (Fig. 3G), with different ECMs segregating mainly into different clusters (Fig. 3H).
The two largest clusters differed mostly in their chemotactic/haptotactic and energy diffusion coefficients; while the small cluster stood out by a high sensitivity to low energy and fast chemotactic
/haptotactic response (Fig. 3I). Overall, all five model parameters showed significant differences between the three clusters, suggesting they all contribute to distinction between ECM growth
patterns (Fig. 3I). This was further affirmed when looking at the % variance explained per principal component per parameter (Fig. 3J). To formalize parameter sensitivity analysis independent of
ECMs, we also calculated the Sobol index (48) of each parameter. The Sobol index quantifies how much of the variability in spatial cell concentration is explained by each parameter, while accounting
for all its interaction effects. Each parameter contributed to significant variance (Sobol index > 0.02; ref. 48) in at least one of three spatial statistics (Fig. 3K): skewness, confluence, and
gradient near the edge of the ECM.
We observed substantial differences in chemotactic/haptotactic coefficients and energy consumption rates between ECMs (Supplementary Fig. S3). To query the biological significance of this
variability, we quantified the expression of the 12 ECMs in the HCC1954 cell line (Materials and Methods). Two of the five inferred ECM-specific model parameters were correlated with RNA-seq–derived
expression of the corresponding ECM: energy consumption rate (Pearson r = −0.657; P = 0.028) and sensitivity to low energy (Pearson r = 0.562; P = 0.071), although latter fell short above
significance (Supplementary Fig. S4).
In summary, the posterior distributions of model parameters represent a substantial departure from the uniform priors and could explain a significant proportion of growth conditions on the
HGF-exposed MEMA array. This approach identified regions of interest in the parameter search space, allowing us to focus further simulations on biologically relevant chemotactic/haptotactic
coefficients and energy diffusion rates.
Infiltration of heterogeneous, chemotactic populations
Growth of cells in a given ECM environment was measured across 13–30 replicates on the MEMA platform. While our model, when calibrated to the corresponding ECM environment, could explain the observed
growth pattern in the majority of these replicates, a substantial fraction could not be explained by fixed choices of sensitivity to low energy and directed cell motility (Supplementary Fig. S3). One
possibility that may explain this is HCC1954 is a heterogeneous cell line, with clones of variable phenotypes coevolving. Representation of these clones among the 31 cells that were on an average
sampled for each replicate may vary (Supplementary Fig. S5). This hypothesis is supported by a bimodal distribution of DNA content observed among replicating HCC1954 cells on individual ECM spots (
Fig. 4A and B; Supplementary Fig. S6A and S6B). If the HCC1954 population was homogeneous, we would expect a unimodal distribution of DAPI intensity among S-phase cells of this cell line. The
observation of a bimodal distribution among S-phase cells suggests that HCC1954 is likely a polyploid cell line, that is, clones of variable ploidies coexist in this cell line.
To better understand the growth dynamics in a polyploid population, we used the two subpopulation version of our model, whereby variable chemotactic abilities and energetic sensitivities of goer and
grower subpopulations compete with one another (Eqs. D–F). We used fixed values for energy diffusion and consumption rates as informed by model calibration (Fig. 3B) and varied sensitivity to low
energy and chemotactic ability of both goer and grower, subject to Eqs. D–F (Table 1; Supplementary Table S1). We initially used the same spatial and temporal domains as during model calibration, but
concluded that the implied duration of the experiment (3 days) was too short for dynamics between the two populations to manifest. Each MEMA spot has a low capacity, whereby confluence is reached at
no more than a few hundreds of cells. Such a small number of cells will not exhibit wave-like behavior and, therefore, will not suffice for spatial structure to emerge. We, therefore, extended
temporal and spatial domains of our simulations, seeding cells at a lower confluence and letting them grow onto the entire energy domain until they consumed all available energy (average of 127 days;
Table 1).
We observed a nonmonotonic relation between the goer's chemotactic ability and the speed with which the metapopulation invades the dish, with intermediate values being the least beneficial to its
growth and spread (Fig. 4C). Temporal analysis of the simulations (Supplementary Data S4–S6) revealed that if the goer's chemotactic motility is too high, it will leave the center of the dish too
soon, leaving room for the grower to expand locally (Fig. 4D). In contrast, if the goer's motility is too low, it will miss the window of opportunity to ensure its dominance further away from the
center of the dish while energy is still abundant. As a consequence, it will be outgrown by the grower at the edge of the dish once energy becomes sparse (Fig. 4E). Only when the goer has an
intermediate motility, does the grower persistently coexist with it, both at the center and edge of the dish (Fig. 4F).
Models of infiltration are typically formulated under two critical assumptions. First, energy production and consumption are nonuniform, leading to the formation of an energy gradient (49–51); or
second, energy consumption is very slow compared with production, leading to an essentially infinite energetic resource (52). Here, we formulated a generalized model of infiltration when energy is
finite and investigated its behavior along a spectrum of scenarios, from permanent energy uniformity to scenarios where this uniformity is gradually lost. The model derivation does not assume a
particular dimension (e.g., 2D in vitro experiments vs. in vivo or 3D spheroid experiments). Many parameters that were valid in 2D will also extend naturally to 3D. For example, we would not expect a
difference in consumption rates or half maximal growth rates of the cells. However, energy diffusion (Γ[E]) or random cell motion (Γ) will be higher because of the increased degree of freedom (53).
When energy is uniformly distributed at all times and the time scale for cell death is substantially longer than that of cell motility and birth, our results suggest that the degree of infiltration
can be approximated using the cells' density at equilibration of movement and growth (Fig. 2C).
With an energy gradient that becomes steeper over time, our analytic approximations no longer hold, as directed cell movement becomes nonnegligible. For this scenario, we leveraged MEMA profiling to
inform regions of interest in the parameter search space. These regions of interest are relevant for cellular growth on a variety of HGF-exposed ECM proteins. We observed correlations between
inferred model parameters and RNA-seq–derived signatures, even though the latter were not used during parameter inference. A potential explanation for the negative correlation between ECM-specific
energy consumption and expression is that our model does not account for the possibility that cells can replace the ECM they degrade. The slower the rate of this replacement is, the higher the
consumption rate appears to be. On the other hand, the more dependent cells are on an ECM for growth, the faster they must replace it, potentially explaining a positive correlation between
ECM-specific expression and sensitivity to low energy.
When calibrating our model to a given ECM environment, growth patterns of a substantial fraction of replicates of that ECM could not be explained by fixed choices of sensitivity to low energy
(Supplementary Fig. S3). A potential explanation for this is variable cell compositions across experimental replicates. An alternative explanation is that this variability stems from artifacts that
arise during nonuniform printing of ECMs onto the array—the so-called ring effect. However, a bimodal distribution was also observed in the DNA content of replicating cells, which is not affected by
potential printing artifacts. The second peak of this bimodal distribution was wider, consistent with the fact that high-ploidy cells with more DNA need longer to replicate.
The cell line, HCC1954, is described as a hypertetraploid cell line with an average DNA content of 4.2 (30). However, this average value may be misleading, as suggested by stark variability in nuclei
sizes (Fig. 4A). Despite a wealth of genomic information generated for this cell line (30), to the best of our knowledge, no prior reports indicate whether or not the cell line is polyploid. We and
others have found that high ploidy is an aneuploidy-tolerating state that accompanies intratumor heterogeneity in vivo and in vitro (5, 54, 55). Our results suggest that HCC1954 is likely polyploid.
One event that could have led to this polyploid state is WGD. In contrast to cell lines, WGD events in primary tumors are mostly clonal, not subclonal (5, 6, 10), clones carrying a doubled genome
often sweep over the population, such that by the time the tumor is detected, the diploid ancestor no longer exists. A related scenario is advanced, therapy-exposed tumors shown to revert to genomic
stability, potentially bringing a WGD population back to a genomic state that more closely resembles its diploid ancestral state (56). The model presented here can investigate how dynamics between
the two subpopulations unfold in both of these scenarios—early, shortly after the WGD or late, after therapy exposure. This would characterize what circumstances prevent the WGD carrying clone from
becoming dominant or from retaining its dominance and could help explain WGD incidence in primary and recurrent tumors.
If spatial and temporal domains were to be extended beyond the configuration of MEMA spots, our simulations predict that spatial segregation of two coexisting subpopulations according to their ploidy
is a likely scenario and depends on the energy consumption rate. Our model can easily be extended to more than two subpopulations, for example, to include a subpopulation of normal cells. For each
additional cell type, a new compartment can be added to the model, with growth- and motion-related parameters that are specific to the corresponding cell type. However, the model currently does not
incorporate mutations, that is, the process of generating new clonal lines. A next step will be to extend our model to include mutation events, specifically chromosome mis-segregations that
contribute extensively to diversify ploidy of a population (57, 58). The additional DNA content of high-ploidy cells, although energetically costly, brings a masking effect against the deleterious
consequences of chromosome losses (10). This duality may explain the higher sensitivity of high-ploidy cells to glycolysis inhibitors and their lower sensitivity to cytotoxic drugs reported
previously in glioblastoma (59).
In-line with prior reports, we found that increased resistance of breast cancer cell lines to cytotoxic drugs was associated with high ploidy. In contrast, high-ploidy breast cancer cell lines were
sensitive to inhibitors of signal transduction pathways, including EGFR and especially mTOR signaling. A commonality among those pathways is their contribution to a cell's chemotactic response (60–62
), suggesting opportunities to tune chemotaxis. mTOR inhibitors (mTOR-I), such as rapamycin, significantly decrease migration of breast cancer cells in a dose-dependent manner (63, 64). Rapamycin
inhibits cell motility by a mechanism similar to that by which it inhibits cell proliferation (65), suggesting that the mTOR pathway lies at the intersection of a cell's decision between
proliferation and migration. If high ploidy is indeed a characteristic specific to goer-like cells, then mTOR-Is are likely affecting this cell type (Fig. 1A, E, and F), and could be used to inhibit
its chemotactic response, thereby moving the population up the x-axis of Fig. 4C. Delaying chemotactic response of highly chemotactic cells could slow down invasion by maximizing competition within a
polyploid population. If, on the other hand, chemotactic response of high-ploidy cells is already at an intermediate level, our simulation suggests that, further reduction may accelerate invasion of
low-ploidy cells. For such scenarios, therapeutic strategies that include an mTOR-I may not be successful. Experiments will be needed to verify these in silico results in vitro. Knowing how
coexisting clones with differential drug sensitivities segregate spatially can offer opportunities to administer these drug combinations more effectively.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or the H. Lee Moffitt Cancer Center and Research Institute
Authors' Contributions
G.J. Kimmel: Conceptualization, software, formal analysis, validation, visualization, methodology, writing-original draft, writing-review and editing. M. Dane: Resources, validation, visualization,
writing-review and editing. L.M. Heiser: Resources, data curation, validation, methodology. P.M. Altrock: Supervision, funding acquisition, visualization, writing-original draft, writing-review and
editing. N. Andor: Conceptualization, data curation, supervision, funding acquisition, visualization, methodology, writing-original draft, writing-review and editing.
We thank the researchers of the Department of Integrated Mathematical Oncology at Moffitt Cancer Center for fruitful discussions and early feedback on parts of the article. This work was supported by
the NCI grant R00CA215256 awarded to N. Andor. P.M. Altrock and N. Andor acknowledge support through the NCI, part of the NIH, under grant number P30-CA076292. L.M. Heiser acknowledges support
through NIH research grants 1U54CA209988 and U54-HG008100, Jayne Koskinas Ted Giovanis Foundation for Health and Policy, and Breast Cancer Research Foundation.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734
solely to indicate this fact.
, et al
Evolutionary dynamics of residual disease in human glioblastoma
Ann Oncol
, et al
Genomic characterization of brain metastases reveals branched evolution and potential therapeutic targets
Cancer Discov
, et al
Evolution of metastases in space and time under immune selection
Copy number variation is a fundamental aspect of the placental genome
PLoS Genet
, et al
Tolerance of whole-genome doubling propagates chromosomal instability and accelerates cancer genome evolution
Cancer Discov
, et al
Genome doubling shapes the evolution and prognosis of advanced cancers
Nat Genet
de Marzo
, et al
Polyploid giant cancer cells: unrecognized actuators of tumorigenesis, metastasis, and resistance
Poly-aneuploid cancer cells promote evolvability, generating lethal cancer
Evol Appl
Generation of cancer stem-like cells through the formation of polyploid giant cancer cells
, et al
Interplay between whole-genome doubling and the accumulation of deleterious alterations in cancer evolution
Nat Genet
, et al
Microenvironment-mediated mechanisms of resistance to HER2 inhibitors differ between HER2+ breast cancer subtypes
Cell Syst
, et al
The library of integrated network-based cellular signatures NIH program: system-level cataloging of human cells response to perturbations
Cell Syst
, et al
Joint single cell DNA-seq and RNA-seq of gastric cancer cell lines reveals rules of in vitro evolution
NAR Genom Bioinform
Characterization of chromosomal aberrations in human gastric carcinoma cell lines using chromosome painting
Cancer Genet Cytogenet
, et al
Karyotypic analysis of gastric carcinoma cell lines carrying an amplified c-met oncogene
Cancer Genet Cytogenet
, et al
Pan-cancer analysis of the extent and consequences of intratumor heterogeneity
Nat Med
Serum starvation: Caveat emptor
Am J Physiol Cell Physiol
Growth of human diploid cells (strain MRC-5) in defined medium; replacement of serum by a fraction of serum ultrafiltrate
J Cell Sci
Effect of serum on the growth of Balb oT3 A31 mouse fibroblasts and an SV40‐transformed derivative
J Cell Physiol
Mechanisms of hepatocyte growth factor-induced retinal endothelial cell migration and growth
Invest Ophthalmol Vis Sci
‘Go or grow’: the key to the emergence of invasion in tumour progression
Math Med Biol
, et al
Glycolysis and the pentose phosphate pathway are differentially associated with the dichotomous regulation of glioblastoma cell migration versus proliferation
Neuro Oncol
McDonough Winslow
, et al
Reciprocal activation of transcription factors underlies the dichotomy between proliferation and invasion of glioma cells
PLoS One
Examining go-or-grow using fluorescent cell-cycle indicators and cell-cycle-inhibiting drugs
Biophys J
Pinned, locked, pushed, and pulled traveling waves in structured environments
Theor Popul Biol
Complex predator invasion waves in a Holling–Tanner model with nonlocal prey interaction
Physica D
On axisymmetric traveling waves and radial solutions of semi-linear elliptic equations
Nat Resour Model
Markov processes over denumerable products of spaces describing large systems of automata
Problems Inform. Transmission
Armadillo: a template-based C++ library for linear algebra
J Open Source Softw
van der Meer
, et al
Cell model passports—a hub for clinical, genetic and functional datasets of preclinical cancer models
Nucleic Acids Res
, et al
The cancer cell line encyclopedia enables predictive modelling of anticancer drug sensitivity
Growth rate inhibition metrics correct for confounders in measuring sensitivity to cancer drugs
Nat Methods
, et al
Quantification of sensitivity and resistance of breast cancer cell lines to anti-cancer drugs using GR metrics
Sci Data
, et al
Subtype and pathway specific responses to anticancer compounds in breast cancer
Proc Natl Acad Sci U S A
, et al
Modeling precision treatment of breast cancer
Genome Biol
, et al
Subtypes of triple-negative breast cancer cell lines react differently to eribulin mesylate
Anticancer Res
Breast cancer cell line classification and its relevance with breast tumor subtyping
J Cancer
GSVA: gene set variation analysis for microarray and RNA-seq data
BMC Bioinformatics
, et al
The reactome pathway knowledgebase
Nucleic Acids Res
, et al
Genomics of Drug Sensitivity in Cancer (GDSC): a resource for therapeutic biomarker discovery in cancer cells
Nucleic Acids Res
Functions of hyaluronan in wound repair
Wound Repair Regen
Differential effects of TGF-beta1 on hyaluronan synthesis by fetal and adult skin fibroblasts: Implications for cell migration and wound healing
Exp Cell Res
The wave of advance of advantageous genes
Annals of Eugenics
Tumor invasion and metastases–role of the extracellular matrix: Rhoads Memorial Award Lecture
Cancer Res
Extracellular matrix: not just pretty fibrils
, et al
Dimensionality reduction for visualizing single-cell data using UMAP
Nat Biotechnol
Dec 3 [Epub ahead of print]
Identification of cell types from single-cell transcriptomes using a novel clustering method
Global sensitivity indices for nonlinear mathematical models
Mathematics and Computers in Simulation
Chemotaxis model for breast cancer cells based on signal/noise ratio
Biophys J
Model for chemotaxis
J Theor Biol
Microenvironment driven invasion: a multiscale multimodel investigation
J Math Biol
Time scales and wave formation in non-linear spatial public goods games
PLoS Comput Biol
On the motion of small particles suspended in liquids at rest required by the molecular-kinetic theory of heat
, et al
Single-cell RNA-seq of follicular lymphoma cancers reveals malignant B cell types and co-expression of T cell immune checkpoints
, et al
Paradoxical relationship between chromosomal instability and survival outcome in cancer
Cancer Res
, et al
Divergent clonal selection dominates medulloblastoma at recurrence
Dynamics of tumor heterogeneity derived from clonal karyotypic evolution
Cell Rep
A Markov chain for numerical chromosomal instability in clonally expanding populations
PLoS Comput Biol
, et al
Hyperdiploid tumor cells increase phenotypic heterogeneity within glioblastoma tumors
Mol Biosyst
, et al
Epidermal growth factor receptor distribution during chemotactic responses
Mol Biol Cell
, et al
mTORC1 and mTORC2 regulate EMT, motility, and metastasis of colorectal cancer via RhoA and Rac1 signaling pathways
Cancer Res
, et al
WIKI4, a novel inhibitor of tankyrase and Wnt/β-catenin signaling
PLoS One
, et al
Everolimus inhibits breast cancer cell growth through PI3K/AKT/mTOR signaling pathway
Mol Med Rep
Metformin induces degradation of mTOR protein in breast cancer cells
Cancer Med
Rapamycin inhibits cell motility by suppression of mTOR-mediated S6K1 and 4E-BP1 pathways
©2020 American Association for Cancer Research.
American Association for Cancer Research.
|
{"url":"https://aacrjournals.org/cancerres/article/80/22/5109/645790/Integrating-Mathematical-Modeling-with-High","timestamp":"2024-11-02T22:12:31Z","content_type":"text/html","content_length":"432013","record_id":"<urn:uuid:b36aa555-b63f-40ff-8ac3-3e8adcaa58ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00827.warc.gz"}
|
Using Math Menus
Throughout the years that I’ve supported classroom teachers’ math instruction, teachers have consistently asked me three questions:
• What do I do with students who finish their math assignments more quickly?
• How can I free up time to work with students who need extra help?
• How can I differentiate experiences to support struggling learners while also meeting the needs of students who require additional challenges?
Because these questions come up so often and so regularly, I’ve named them The Big Three. In this article, I describe a strategy I’ve used in my own classroom and now use with many teachers that
helps address these three questions. Like many good strategies, it requires careful preplanning, but can make a large contribution to teaching and learning.
My Adopted Classroom
As an education consultant, I’ve come to believe that I can only offer teachers help with questions like The Big Three if I’m directly connected with classroom teaching. And because I’m no longer a
full-time classroom teacher, I’ve found that the best way for me to continue to improve my teaching practice is to engage regularly with one class in a school so that I’m part of their math learning
and their teacher’s planning.
Most recently I adopted a 4th grade class at John Muir Elementary School in the San Francisco Unified School District. The teacher, Sara Liebert, has been teaching at the school for five years and
has been a wonderful collaborator. We’ve also received guidance from my colleague Lynne Zolli, retired after 41 years as a classroom teacher in the district. It’s been a dream situation. I’ve enjoyed
teaching, observing, and learning from students, and I’ve enjoyed helping Sara plan her math instruction.
Like most elementary teachers, Sara is responsible for teaching her 4th graders all subjects, and she typically does a good deal of planning in the evenings. When planning for math, she usually
reviews students’ work from the day before and then prepares for the next day’s lesson—an arduous enough regimen that typically doesn’t leave time to plan for The Big Three.
Lynne and I offered Sara a strategy that would help her address The Big Three when planning classroom instruction. That strategy was math menus.
What Is a Math Menu?
A math menu is a list of math options posted on a sheet of chart paper for all to see. The options can include problems, investigations, games, and other activities that promote students’
understanding, support their reasoning, or provide practice with the content and skills they’ve been learning. Sometimes I introduce a menu with just one or two choices, and then add to the list as I
introduce new content. I typically have seven or eight options, at most, on the menu.
Using math menus gives teachers a solution for each of the challenges posed by The Big Three. Math menus provide students who finish in-class assignments more quickly with a way to be productively
engaged. It’s sort of a math counterpart to having students read silently when they finish class work. A teacher can also have the entire class work on items from a menu independently for a period
of time, freeing time for that teacher to work with individuals or small groups. And a math menu can offer a variety of experiences at a range of levels of difficulty to meet different students’
needs. A particular menu item might also include variations that further allow for differentiation.
Some menu choices should be designed for students to complete individually, providing a way to assess each student’s progress. For other activities—typically those meant to help students explore
something new, extend an experience, or deepen understanding—students can work in pairs. Some teachers mark each task with an “I” or a “P” so students are clear whether it’s an individual or partner
activity. Games that students play in pairs give learners a way to practice skills, apply reasoning, and use strategic thinking—and are especially good menu items because they encourage revisits. We
generally ask students to play any game on the menu at least four times and with at least four different classmates.
We typically have students complete all the items on a given menu, but give them the opportunity to choose the order in which they try the tasks and which tasks they’d like to revisit. Sometimes,
however, we direct the class to a particular item—for example, if we want students to engage with a certain activity so that they are prepared for an upcoming class discussion.
Activities on a menu may all focus on a particular topic—such as multiplication—or may draw from a range of topics, including some that students learned earlier. A teacher can add menu items as new
experiences are introduced and instructional content shifts, and cross out options that are no longer appropriate. When the paper becomes full or messy with crossed-out items, start over on a new
It’s essential that menu options are familiar enough to students that they can work independently, and it’s beneficial if some options offer variations that make for easier access for some and more
of a challenge for others. Students’ choices often reveal their academic comfort level.
A Lesson to Strengthen Addition and Subtraction Skills
To see how a math menu can extend concepts or skills explored in a lesson, let’s look at a lesson from early in the school year in Sara Liebert’s class—and the task she and I added to our menu as a
follow-up. As I describe this lesson, I note places where the careful planning I had done helped things go effectively.
Our focus was on bolstering our 4th graders’ understanding of place value and their mental reasoning skills with addition and subtraction. To focus on the benchmark number of 100, I wrote three
numbers on the board—50, 70, and 80—and this direction: Add or subtract, using each number once, to equal the target number 100.
There’s more than one way to get the target number of 100 by following these directions (actually, there are three ways). I knew this because I’d solved the problem myself and written down all the
possible solutions during my planning. I find that when planning lessons, it’s important to solve ahead of time any problems the lesson will involve, so I can anticipate student responses and be
prepared to offer suggestions if they get stuck.
I didn’t tell the students that multiple ways were possible; instead, I had them work with a partner to come up with a way to solve the problem. Then I led a class discussion in which students shared
their answers.
During our discussion, two solutions emerged. One student reported that he and his partner first added 80 and 70 to get 150, and then subtracted 50 to get 100. Other students reported using the same
solution. I wrote two equations on the board:
80 + 70 = 150
150 – 50 = 100
Then I asked, “Did anyone find a different way?” Two girls reported that they had started with subtraction. As they explained their procedures, I recorded two equations on the board to represent how
they reasoned:
70 – 50 = 20
80 + 20 = 100
I asked again if anyone had found a different way. When no one raised a hand, I told them, “I found another way. I’ll give you a few minutes to see whether you and your partner can think of it. If
not, I’ll share what I figured out.”
It wasn’t long before several students figured out the third way, and I recorded their solution:
80 – 50 = 30
70 + 30 = 100
I confirmed for the class that these were the only three solutions I had found. Then, I returned to each set of two equations and modeled for the class how I could have recorded each solution with
just one equation. This gave me the opportunity to introduce the notation of parentheses. I told the students, “Parentheses are math punctuation. They aren’t absolutely necessary here, but they help
to show what you did first.” I wrote an equation using parentheses for each solution:
(80 + 70) – 50 = 100
(70 – 50) + 80 = 100
(80 – 50) + 70 = 100
At this point, I gave the students a follow-up problem to work on individually. I wrote on the board the numbers 40, 60, and 80 and this direction: The target number is 100. Your task is to figure
out and record three different ways to add and subtract, using each number once, to reach the target number.
Planning and including time for in-class assignments that students do individually is an essential component of structuring good math lessons. Individual work is important for assessing students’
understanding and monitoring their progress. In contrast to worksheets, which usually just have students practice procedures, well-planned assignments call for evidence of how students reason.
For instance, the individual work of two students on this assignment, shown in Figure 1, reveals something about their thinking and how much each student has learned. Mikala used two equations to
represent each of the three ways she solved the problem. Brayon wrote one equation for each way and used parentheses, which I had just introduced to the class.
Extending the Lesson to the Math Menu
Looking over the student work from this in-class assignment helped Sara, Lynne, and me decide that this kind of exploration would be appropriate to include on the class’s math menu. We saw that doing
more of these kinds of problems would provide additional numerical practice and would be easily accessible to all but a few struggling students. So we created a menu option called Three Ways. To
ensure that this option would serve the needs of all our students, even the strugglers, we created 40 problems that used different target numbers, ranging from 20 to 500, and were of varying levels
of difficulty. We organized the problems into four sets—with the first set having the easiest problems and the fourth the most difficult—with 10 problems in each set, wrote each problem on an index
card (using a different color card for each set), and numbered the cards from 1 to 40 in order of difficulty.
Because we had two students who were sorely in need of the special intervention we were providing, we included problems with smaller numbers—for example, a target of 20 with 8, 2, and 14 as the three
numbers to use. At the same time, we provided options for students who would benefit from challenging problems—for example, a target of 150 with 75, 95, and 130 as the numbers to use, or a target of
200 with 350, 100 and 250.
When we added Three Ways to the menu, we explained to students that it was an individual task that would help us learn how each of them was doing with adding and subtracting multi-digit numbers. We
showed students how the problems were organized, and then told them that they had to solve 10 problems and could choose any problems from any of the sets we had prepared. I advised students, “You
might choose some problems with smaller numbers to ease into the task, and then try some with greater numbers. Or jump into problems with greater numbers that give you more of a challenge. Be sure to
let us know if you choose a problem that stumps you.”
Over the years of working with students in math classes, I’ve learned that giving students options for what to work on is wonderful for making them feel empowered and in control of their learning.
Also, students’ choices reveal valuable information about their confidence, caution, willingness to take risks, and more.
Even though students choose their problems, I sometimes ask a student who I think could handle more of a challenge to try a particular activity. Or I suggest something to the whole class. For the
Three Ways task, I suggested that if any students wanted a challenge even greater than trying the most difficult problem set, they could try to solve all the problems. This option was especially
interesting to a few students. For instance, Carl wrote for his daily reflection, “I’m wondering how many days, weeks, or months it would take us to finish the Three Ways.” No student completed all
40 problems, but several did more than the minimum of 10.
The students enjoyed the Three Ways problems, and we enjoyed watching them get much-needed practice with addition and subtraction. Although a worksheet of straightforward problems to solve would have
provided practice, this task gave students more opportunities to reason numerically as they tried different ways to add and subtract the numbers.
A Caveat—and an Added Benefit
Although math menus are a great way to address The Big Three, like any organizing system for managing instruction, they will only be as effective as the quality of the math lessons you teach and the
tasks you create for menus. Our litmus test for both math lessons and menu options is that they involve students in thinking, reasoning, and making sense. And on math menus, we strive to include
tasks that are accessible to all students but have the potential to challenge the most able and confident.
As an extra benefit, math menus are a terrific help when you have to miss a day of school and make lesson plans for a substitute. We all know how tough it is to make plans for a sub—who might be
unfamiliar with your class and the work the students have been doing—that will result in effective use of students’ learning time. When I taught middle school math to five different classes each day,
I prepared my students for days when I would be away by using math menus. Students knew there was a folder in my desk labeled “For the Sub,” which included a brief description of our math menu and
the names of students in each class assigned to help get work on the menus going. Students understood the routine and enjoyed the opportunity. It was always a positive experience for both the
students and the subs.
This school year, I’m working with Sara Liebert’s class again as 5th graders, essentially the same class as last year with a few new students. Sara introduced a menu to begin the year, to help
students become familiar with the routine. She taught several math games students could play with partners, included a problem for them to tackle, and gave some routine practice problems. While
students worked on the menu, Sara spent time interviewing the new students to find out more about their understanding and skills, and worked with learners she knew needed more support. I’m excited
about helping these students continue their math learning.
Copyright © 2016 Marilyn Burns
|
{"url":"https://ascd.org:443/el/articles/using-math-menus","timestamp":"2024-11-11T08:13:34Z","content_type":"application/xhtml+xml","content_length":"294270","record_id":"<urn:uuid:5e8addc9-4fb2-49e1-9a58-d1965095a09d>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00047.warc.gz"}
|
Measurement of the Specific Latent Heat of Vaporisation of Water Using an Electrical Method
Mains water heater, stop-watch, thermometer, top-pan balance, stirrer.
1. Measure the mass of the empty water heater.
2. Fill the heater with water so that it is approximately half full.
Remeasure the mass and hence determine the mass of the water.
3. Stir and measure the initial temperature of the water.
4. Start the stop-watch and switch on the heater.
5. When the water is hot BUT NOT BOILING switch off the heater and STOP-WATCH.
Record the time of heating. Over the next minute or so note the maximum temperature reached by the water. Calculate the temperature rise of the water.
6. Switch on the heater again. When the water starts to boil (100×C), start the stop-watch.
7. After five minutes turn off the heater. Leave the water / heater to cool for awhile.
You can begin the calculations below.
8. Measure the mass of the heater and remaining water and so calculate the water converted to steam.
Calculation of specific latent heat:
1. Calculate the heat energy supplied by the heater during stages 4 & 5 from:
Heat energy supplied = Mass of water x SHC water x Temperature rise
2. Calculate the power of the heater from:
Power of heater = Heat energy supplied / Time of heating (stage 4 & 5)
3. Calculate the heat supplied by the heater while boiling the water from:
Heat supplied (stage 7) = Power of heater x Time of heating (stage 7)
4. Calculate the specific latent heat of vaporisation of water from:
Heat supplied (stage 7) = Mass of water converted to steam x Specific latent heat
5. The accepted value of the specific latent heat of vaporisation of water is 2.1 MJ / kg.
Discuss why your answer is different from this.
|
{"url":"https://astarmathsandphysics.com/a-level-physics-notes/experimental-physics/2705-measurement-of-the-specific-latent-heat-of-vaporisation-of-water-using-an-electrical-method.html?tmpl=component&print=1","timestamp":"2024-11-10T17:46:36Z","content_type":"text/html","content_length":"9047","record_id":"<urn:uuid:58d6a3e0-b0cf-4d13-8a83-3ab9dda6ebfa>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00800.warc.gz"}
|
Asymptotic expansions of the Helmholtz equation solutions using approximations of the Dirichlet to Neumann operator
This paper is concerned with the asymptotic expansions of the amplitude of the solution of the Helmholtz equation. The original expansions were obtained using a pseudo-differential decomposition of
the Dirichlet to Neumann operator. This work uses first and second order approximations of this operator to derive new asymptotic expressions of the normal derivative of the total field. The
resulting expansions can be used to appropriately choose the ansatz in the design of high-frequency numerical solvers, such as those based on integral equations, in order to produce more accurate
approximation of the solutions around the shadow and the deep shadow regions than the ones based on the usual ansatz.
All Science Journal Classification (ASJC) codes
• Analysis
• Applied Mathematics
• Asymptotic analysis
• Dirichlet to Neumann operator
• Wave equation
Dive into the research topics of 'Asymptotic expansions of the Helmholtz equation solutions using approximations of the Dirichlet to Neumann operator'. Together they form a unique fingerprint.
|
{"url":"https://researchwith.njit.edu/en/publications/asymptotic-expansions-of-the-helmholtz-equation-solutions-using-a","timestamp":"2024-11-05T04:18:20Z","content_type":"text/html","content_length":"49309","record_id":"<urn:uuid:e2bc3176-6169-42bd-b7dc-2541e7059bb9>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00140.warc.gz"}
|
text_matching, question_matching, R-Drop--stronger regularized dropout
class RDropLoss(nn.Layer):
R-Drop Loss implementation
For more information about R-drop please refer to this paper: https://arxiv.org/abs/2106.14448
Original implementation please refer to this code: https://github.com/dropreg/R-Drop
def __init__(self, reduction='none'):
reduction(obj:`str`, optional): Indicate how to average the loss,
the candicates are ``'none'`` | ``'batchmean'`` | ``'mean'`` | ``'sum'``.
If `reduction` is ``'mean'``, the reduced mean loss is returned;
If `reduction` is ``'batchmean'``, the sum loss divided by batch size is returned;
if `reduction` is ``'sum'``, the reduced sum loss is returned;
if `reduction` is ``'none'``, no reduction will be apllied.
Default is ``'none'``.
super(RDropLoss, self).__init__()
if reduction not in ['sum', 'mean', 'none', 'batchmean']:
raise ValueError(
"'reduction' in 'RDropLoss' should be 'sum', 'mean' 'batchmean', or 'none', "
"but received {}.".format(reduction))
self.reduction = reduction
def forward(self, p, q, pad_mask=None):
p(obj:`Tensor`): the first forward logits of training examples.
q(obj:`Tensor`): the second forward logits of training examples.
pad_mask(obj:`Tensor`, optional): The Tensor containing the binary mask to index with, it's data type is bool.
loss(obj:`Tensor`): the rdrop loss of p and q
p_loss = F.kl_div(F.log_softmax(p, axis=-1), F.softmax(q, axis=-1), reduction=self.reduction)
q_loss = F.kl_div(F.log_softmax(q, axis=-1), F.softmax(p, axis=-1), reduction=self.reduction)
# pad_mask is for seq-level tasks
if pad_mask is not None:
p_loss = paddle.masked_select(p_loss, pad_mask)
q_loss = paddle.masked_select(q_loss, pad_mask)
# You can choose whether to use function "sum" and "mean" depending on your task
p_loss = p_loss.sum()
q_loss = q_loss.sum()
loss = (p_loss + q_loss) / 2
return loss
The bottom definition of the Rdrop loss function, calculating KL divergence from each other, maintaining symmetry
class QuestionMatching(nn.Layer):
def __init__(self, pretrained_model, dropout=None, rdrop_coef=0.0):
self.ptm = pretrained_model
self.dropout = nn.Dropout(dropout if dropout is not None else 0.1)
# num_labels = 2 (similar or dissimilar)
self.classifier = nn.Linear(self.ptm.config["hidden_size"], 2)
self.rdrop_coef = rdrop_coef
self.rdrop_loss = ppnlp.losses.RDropLoss()
def forward(self,
_, cls_embedding1 = self.ptm(input_ids, token_type_ids, position_ids,
cls_embedding1 = self.dropout(cls_embedding1)
logits1 = self.classifier(cls_embedding1)
# For more information about R-drop please refer to this paper: https://arxiv.org/abs/2106.14448
# Original implementation please refer to this code: https://github.com/dropreg/R-Drop
if self.rdrop_coef > 0 and not do_evaluate:
_, cls_embedding2 = self.ptm(input_ids, token_type_ids, position_ids,
cls_embedding2 = self.dropout(cls_embedding2)
logits2 = self.classifier(cls_embedding2)
kl_loss = self.rdrop_loss(logits1, logits2)
kl_loss = 0.0
return logits1, kl_loss
Loss function reference, equivalent to a batch data input twice, the output cannot be too different, control model parameters by minimizing the loss function control
input_ids, token_type_ids, labels = batch
logits1, kl_loss = model(input_ids=input_ids, token_type_ids=token_type_ids)
correct = metric.compute(logits1, labels)
acc = metric.accumulate()
ce_loss = criterion(logits1, labels)
if kl_loss > 0:
loss = ce_loss + kl_loss * args.rdrop_coef
loss = ce_loss
Display of Final Loss Function
Deep Neural Network (DNN) has recently achieved remarkable success in various fields.When training these large-scale DNN models, regularization techniques, such as L2 Normalization, Batch
Normalization, Dropout, and so on, are indispensable to prevent over-fitting of the models while improving the generalization ability of the models.Among them, Dropout is the most widely used
regularization technique because it simply discards a part of the neurons during training.
Recently, Microsoft Asia Research Institute and Soochow University proposed a further regular method based on Dropout [1]: Regularized Dropout, referred to as R-Drop.Unlike traditional constraints on
neurons (Dropout) or model parameters (DropConnect [2]), R-Drop acts on the output layer of the model, compensating for Dropout inconsistencies in training and testing.Simply put, in each mini-batch,
each data sample has the same model with Dropout twice, and R-Drop uses KL-divergence to constrain the output to be consistent twice.Therefore, R-Drop constrains the output consistency of the two
random submodels due to Dropout.
Because deep neural networks are very easy to over-fit, Dropout method uses random discarding of some neurons in each layer to avoid over-fitting during training.It is precisely because some neurons
are randomly discarded each time, resulting in different submodels after each discard, so Dropout's operation to some extent makes the trained model a combination constraint of multiple
submodels.Based on the randomness that Dropout brings to the network, researchers propose R-Drop to further constrain the output prediction of the network.
First, an acceptable concept is introduced, that is, the same input, the same model, which traverses two Dropout s to get two different distributions, approximating the two path networks as two
different model networks, as shown on the right.
Specifically, when given training data D={x_i, y_I} (I=1)^n, x_for each training sampleI, the network is propagated forward twice, resulting in two output predictions: P_1 (y_i_ x_i), P_2 (y_I |
x_i).Because Dropout randomly discards some of its neurons at a time, P_1 and P_2 are two different prediction probabilities derived from two different subnetworks (derived from the same model), as
shown in Figure 1.R-Drop uses a symmetric Kullback-Leibler (KL) divergence to treat P_1 and P_2 Constraint
amongα Is used to control L_The coefficient of KL^i makes the training of the whole model very simple.In the actual implementation, data x_i You don't need a model twice, you just need x_i Copy one
copy in the same batch.Intuitively, Dropout expects the output of each submodel to be close to the true distribution during training. However, during testing, Dropout closure makes the model average
only in parameter space, so there is inconsistency between training and testing.R-Drop, on the other hand, deliberately constrains the outputs between sub-models during training to constrain the
parameter space so that different outputs can be consistent, thus reducing the inconsistency between training and testing.In addition, researchers also explained the control of R-Drop constraints on
the degree of freedom of the model from a theoretical point of view, so as to better improve the generalization performance of the model.
Because of the asymmetry of the KL divergence itself, the author indirectly uses the globally symmetric KL divergence by exchanging the positions of the two distributions, which is called two-way KL
divergence in this paper.In addition, the model is routinely trained for NLL loss items of both distributions.The final loss is as follows: The traditional loss function, which is derived from the
maximum likelihood function, is beneficial from the perspective of probability theory.
In order to save training time, instead of duplicating the same input twice, input sentences are stitched together and [formulae], which is equivalent to doubling the batch size. This saves a lot of
training time, of course, compared to the original model, this will increase the training time of each step.Because the training complexity of the model increases, more time is required to
converge.The training is as follows:
1 k-step R-Drop
The model experimented with four step strategies, 1, 2, 5, and 10. The training results are shown in the following figure. It can be found that with the increase of k value, the effect of the model
is deteriorating.Therefore, the best results of the above experiments are obtained when step is 1.
1. m-time R-Drop
In the model diagram given above, the same input will go in and out twice, get different results, and then train, but can you enter three or more times?The paper experimented with three inputs
and found that there was no big difference between the two, so it was considered that the two inputs had a strong regularization effect on the model.
2. Two Dropout Rates
For the same input twice, the probability of dropout is the same, so different results will be better. The experimental results are as follows:
|
{"url":"https://www.fatalerrors.org/a/text_matching-question_matching-r-drop-stronger-regularized-dropout.html","timestamp":"2024-11-07T09:04:38Z","content_type":"text/html","content_length":"19067","record_id":"<urn:uuid:9fc4873f-f914-44ff-8624-dd96ab402cde>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00865.warc.gz"}
|
Heat and Mass Transfer in a Cylindrical Heat Pipe with Idealized Porous Lining
Master of Science in Mechanical Engineering (MSME)
Mechanical Engineering
Many devices use heat pipes for cooling. They can be of different cross-sectional shapes and can range from 15 m long to 10 mm. In this work, the heat and mass transfer in a cylindrical heat pipe is
modelled. The heat pipe is lined with a wick next to the wall. The wick consists of straight capillaries that run the whole length of the pipe and radially aligned capillaries that span the width of
the wick. The capillaries are filled with a partially wetting liquid, and the center of the pipe is filled with its vapor. The radial capillaries are connected to the axial capillaries and are opened
to the vapor region at the wick surface. The pipe is initially charged with an amount of liquid such that the liquid-vapor interface at the radial capillary openings is flat. When one end of the pipe
is heated, the liquid evaporates and increases the vapor pressure. Hence, the vapor is driven to the cold end where it condenses and releases latent heat. The condensate moves back to the hot end
through the capillaries in the wick. Steady-flow problem is solved in this work assuming a small imposed temperature difference between the two ends making the temperature profile skew-symmetric. So,
we only need to focus on the heated half. Also, since the pipe is slender, the axial flow gradients are much smaller than the cross-stream gradients. Therefore, evaporative flow in a cross-sectional
plane can be treated as two-dimensional. Evaporation rate in each pore is solved in the limit the evaporation number to find that the liquid evaporates mainly in a boundary layer at the contact line.
An analytical solution is obtained for the leading order evaporation rate. Therefore, we find the analytic solutions for the temperature profile and pressure distributions along the pipe. Two
dimensionless numbers emerge from the momentum and energy equations: the heat pipe number, H, which is the ratio of heat transfer by vapor flow to conductive heat transfer in the liquid and wall, and
the evaporation exponent, S, which controls the evaporation gradient along the pipe. Conduction in the liquid and wall dominates in the limit and . When and , vapor-flow heat transfer dominates and a
thermal boundary layer appears at the hot end whose thickness scales as S-1L, where L is the half length of the pipe. A similar boundary layer exists at the cold end. The regions outside the boundary
layer is uniform. These regions correspond to the evaporating, adiabatic and the condensing regions commonly observed in heat pipes. For fixed cross-sectional pipe configuration, we find an optimal
pipe length for evaporative heat transfer. For fixed pipe length, we also find an optimal wick thickness for evaporative heat transfer. These optimal pipe length and wick thickness can help to
improve the design of a heat pipe and are found for the first time.
Document Availability at the Time of Submission
Secure the entire work for patent and/or proprietary purposes for a period of one year. Student has submitted appropriate documentation which states: During this period the copyright owner also
agrees not to exercise her/his ownership rights, including public use in works, without prior authorization from LSU. At the end of the one year period, either we or LSU may request an automatic
extension for one additional year. At the end of the one year secure period (or its extension, if such is requested), the work will be released for access worldwide.
Recommended Citation
Regmi, Pramesh, "Heat and Mass Transfer in a Cylindrical Heat Pipe with Idealized Porous Lining" (2017). LSU Master's Theses. 4596.
Committee Chair
Wong, Harris H
|
{"url":"https://repository.lsu.edu/gradschool_theses/4596/","timestamp":"2024-11-09T12:49:12Z","content_type":"text/html","content_length":"42211","record_id":"<urn:uuid:02155f60-9894-4f61-846d-b575c070f32d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00194.warc.gz"}
|
Fundamentals Archives | Civil Engineering Pdf
Indeterminate Structures Analysis McGraw Hill: Indeterminate Structures Analysis McGraw Hill is a book written by Wangand published by McGraw Hill Publications. While many other books on the subject
deal with all kinds of structures and are more general, this one deals with indeterminate structures so that the students can learn how to analyze structures that cannot be determined statically.
Indeterminate […]
» Read more
|
{"url":"https://civilengineeringpdf.com/category/fundamentals/","timestamp":"2024-11-12T19:54:49Z","content_type":"text/html","content_length":"108928","record_id":"<urn:uuid:6d28d46e-dd98-4c3a-87ad-c7ddd8bfbcf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00742.warc.gz"}
|
Reinforcement Learning and Control
Methods for training agents to make sequences of decisions by rewarding desired behaviors and punishing undesired ones, used in robotics, game playing, and more.
Tameem Adel, Adrian Weller, June 2019. (In 36th International Conference on Machine Learning). Long Beach.
One of the challenges to reinforcement learning (RL) is scalable transferability among complex tasks. Incorporating a graphical model (GM), along with the rich family of related methods, as a basis
for RL frameworks provides potential to address issues such as transferability, generalisation and exploration. Here we propose a flexible GM-based RL framework which leverages efficient inference
procedures to enhance generalisation and transfer power. In our proposed transferable and information-based graphical model framework ‘TibGM’, we show the equivalence between our mutual
information-based objective in the GM, and an RL consolidated objective consisting of a standard reward maximisation target and a generalisation/transfer objective. In settings where there is a
sparse or deceptive reward signal, our TibGM framework is flexible enough to incorporate exploration bonuses depicting intrinsic rewards. We empirically verify improved performance and exploration
B. Bischoff, D. Nguyen-Tuong, D. van Hoof, A. McHutchon, Carl Edward Rasmussen, A. Knoll, M. P. Deisenroth, 2014. (In IEEE International Conference on Robotics and Automation). Hong Kong, China.
IEEE. DOI: 10.1109/ICRA.2014.6907422.
In many complex robot applications, such as grasping and manipulation, it is difficult to program desired task solutions beforehand, as robots are within an uncertain and dynamic environment. In such
cases, learning tasks from experience can be a useful alternative. To obtain a sound learning and generalization performance, machine learning, especially, reinforcement learning, usually requires
sufficient data. However, in cases where only little data is available for learning, due to system constraints and practical issues, reinforcement learning can act suboptimally. In this paper, we
investigate how model-based reinforcement learning, in particular the probabilistic inference for learning control method (PILCO), can be tailored to cope with the case of sparse data to speed up
learning. The basic idea is to include further prior knowledge into the learning process. As PILCO is built on the probabilistic Gaussian processes framework, additional system knowledge can be
incorporated by defining appropriate prior distributions, e.g. a linear mean Gaussian prior. The resulting PILCO formulation remains in closed form and analytically tractable. The proposed approach
is evaluated in simulation as well as on a physical robot, the Festo Robotino XT. For the robot evaluation, we employ the approach for learning an object pick-up task. The results show that by
including prior knowledge, policy learning can be sped up in presence of sparse data.
A minimum relative entropy principle for adaptive control in linear quadratic regulators
Daniel A. Braun, Pedro A. Ortega, 2010. (In Proceedings of the 7th international conference on informatics in control, automation and robotics).
The design of optimal adaptive controllers is usually based on heuristics, because solving Bellman’s equations over information states is notoriously intractable. Approximate adaptive controllers
often rely on the principle of certainty-equivalence where the control process deals with parameter point estimates as if they represented “true” parameter values. Here we present a stochastic
control rule instead where controls are sampled from a posterior distribution over a set of probabilistic input-output models and the true model is identified by Bayesian inference. This allows
reformulating the adaptive control problem as an inference and sampling problem derived from a minimum relative entropy principle. Importantly, inference and action sampling both work forward in time
and hence such a Bayesian adaptive controller is applicable on-line. We demonstrate the improved performance that can be achieved by such an approach for linear quadratic regulator examples.
Daniel A. Braun, Pedro A. Ortega, Evangelos Theodorou, Stefan Schaal, 2011. (In 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning).
Path integral methods have recently been shown to be applicable to a very general class of optimal control problems. Here we examine the path integral formalism from a decision-theoretic point of
view, since an optimal controller can always be regarded as an instance of a perfectly rational decision-maker that chooses its actions so as to maximize its expected utility. The problem with
perfect rationality is, however, that finding optimal actions is often very difficult due to prohibitive computational resource costs that are not taken into account. In contrast, a bounded rational
decision-maker has only limited resources and therefore needs to strike some compromise between the desired utility and the required resource costs. In particular, we suggest an information-theoretic
measure of resource costs that can be derived axiomatically. As a consequence we obtain a variational principle for choice probabilities that trades off maximizing a given utility criterion and
avoiding resource costs that arise due to deviating from initially given default choice probabilities. The resulting bounded rational policies are in general probabilistic. We show that the solutions
found by the path integral formalism are such bounded rational policies. Furthermore, we show that the same formalism generalizes to discrete control problems, leading to linearly solvable bounded
rational control policies in the case of Markov systems. Importantly, Bellman’s optimality principle is not presupposed by this variational principle, but it can be derived as a limit case. This
suggests that the information theoretic formalization of bounded rationality might serve as a general principle in control design that unifies a number of recently reported approximate optimal
control methods both in the continuous and discrete domain.
Arunkumar Byravan, Leonard Hasenclever, Piotr Trochim, Mehdi Mirza, Alessandro Davide Ialongo, Yuval Tassa, Jost Tobias Springenberg, Abbas Abdolmaleki, Nicolas Heess, Josh Merel, Martin Riedmiller,
2022. (In 10th International Conference on Learning Representations).
There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various
challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC.
We show that MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency with respect to model-free methods.
However, we find that well-tuned model-free agents are strong baselines even for high DoF control problems. Finally, we show that it is possible to distil a model-based planner into a policy that
amortizes the planning computation without any loss of performance.
Jan-Peter Calliess, 2016. (arXiv).
Techniques known as Nonlinear Set Membership prediction, Lipschitz Interpolation or Kinky Inference are approaches to machine learning that utilise presupposed Lipschitz properties to compute
inferences over unobserved function values. Provided a bound on the true best Lipschitz constant of the target function is known a priori they offer convergence guarantees as well as bounds around
the predictions. Considering a more general setting that builds on Hölder continuity relative to pseudo-metrics, we propose an online method for estimating the Hoelder constant online from function
value observations that possibly are corrupted by bounded observational errors. Utilising this to compute adaptive parameters within a kinky inference rule gives rise to a nonparametric machine
learning method, for which we establish strong universal approximation guarantees. That is, we show that our prediction rule can learn any continuous function in the limit of increasingly dense data
to within a worst-case error bound that depends on the level of observational uncertainty. We apply our method in the context of nonparametric model-reference adaptive control (MRAC). Across a range
of simulated aircraft roll-dynamics and performance metrics our approach outperforms recently proposed alternatives that were based on Gaussian processes and RBF-neural networks. For discrete-time
systems, we provide stability guarantees for our learning-based controllers both for the batch and the online learning setting.
Jan-Peter Calliess, Stephen Roberts, Carl Edward Rasmussen, Jan Maciejowski, 2018. (In Proceedings of the European Control Conference).
Methods known as Lipschitz Interpolation or Nonlinear Set Membership regression have become established tools for nonparametric system-identification and data-based control. They utilise presupposed
Lipschitz properties to compute inferences over unobserved function values. Unfortunately, they rely on the a priori knowledge of a Lipschitz constant of the underlying target function which serves
as a hyperparameter. We propose a closed-form estimator of the Lipschitz constant that is robust to bounded observational noise in the data. The merger of Lipschitz Interpolation with the new
hyperparameter estimator gives a new nonparametric machine learning method for which we derive online learning convergence guarantees. Furthermore, we apply our learning method to model-reference
adaptive control and provide a convergence guarantee on the closed-loop dynamics. In a simulated flight manoeuvre control scenario, we compare the performance of our approach to recently proposed
alternative learning-based controllers.
Krzysztof Choromanski, David Cheikhi, Jared Davis, Valerii Likhosherstov, Achille Nazaret, Achraf Bahamou, Xingyou Song, Mrugank Akarte, Jack Parker-Holder, Jacob Bergquist, Yuan Gao, Aldo Pacchiano,
Tamas Sarlos, Adrian Weller, Vikas Sindhwani, 2020. (In 37th International Conference on Machine Learning).
We present a new class of stochastic, geometrically-driven optimization algorithms on the orthogonal group O(d) and naturally reductive homogeneous manifolds obtained from the action of the rotation
group SO(d). We theoretically and experimentally demonstrate that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks,
reinforcement learning, normalizing flows and metric learning. We show an intriguing connection between efficient stochastic optimization on the orthogonal group and graph theory (e.g. matching
problem, partition functions over graphs, graph-coloring). We leverage the theory of Lie groups and provide theoretical results for the designed class of algorithms. We demonstrate broad
applicability of our methods by showing strong performance on the seemingly unrelated tasks of learning world models to obtain stable policies for the most difficult Humanoid agent from OpenAI Gym
and improving convolutional neural networks.
Will Dabney, Mark Rowland, Marc G. Bellemare, Rémi Munos, February 2018. (In 32nd AAAI Conference on Artificial Intelligence). New Orleans.
In reinforcement learning an agent interacts with the environment by taking actions and observing the next state and reward. When sampled probabilistically, these state transitions, rewards, and
actions can all induce randomness in the observed long-term return. Traditionally, reinforcement learning algorithms average over this randomness to estimate the value function. In this paper, we
build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean. That is, we
examine methods of learning the value distribution instead of the value function. We give results that close a number of gaps between the theoretical and algorithmic results given by Bellemare,
Dabney, and Munos (2017). First, we extend existing results to the approximate distribution setting. Second, we present a novel distributional reinforcement learning algorithm consistent with our
theoretical formulation. Finally, we evaluate this new algorithm on the Atari 2600 games, observing that it significantly outperforms many of the recent improvements on DQN, including the related
distributional algorithm C51.
Marc Peter Deisenroth, 2010. Karlsruhe Institute of Technology, Karlsruhe, Germany.
In many research areas, including control and medical applications, we face decision-making problems where data are limited and/or the underlying generative process is complicated and partially
unknown. In these scenarios, we can profit from algorithms that learn from data and aid decision making. Reinforcement learning (RL) is a general computational approach to experience-based
goal-directed learning for sequential decision making under uncertainty. However, RL often lacks efficiency in terms of the number of required trials when no task-specific knowledge is available.
This lack of efficiency makes RL often inapplicable to (optimal) control problems. Thus, a central issue in RL is to speed up learning by extracting more information from available experience. The
contributions of this dissertation are threefold: 1. We propose PILCO, a fully Bayesian approach for efficient RL in continuous-valued state and action spaces when no expert knowledge is available.
PILCO is based on well-established ideas from statistics and machine learning. PILCO’s key ingredient is a probabilistic dynamics model learned from data, which is implemented by a Gaussian process
(GP). The GP carefully quantifies knowledge by a probability distribution over plausible dynamics models. By averaging over all these models during long-term planning and decision making, PILCO takes
uncertainties into account in a principled way and, therefore, reduces model bias, a central problem in model-based RL. 2. Due to its generality and efficiency, PILCO can be considered a conceptual
and practical approach to jointly learning models and controllers when expert knowledge is difficult to obtain or simply not available. For this scenario, we investigate PILCO’s properties its
applicability to challenging real and simulated nonlinear control problems. For example, we consider the tasks of learning to swing up a double pendulum attached to a cart or to balance a unicycle
with five degrees of freedom. Across all tasks we report unprecedented automation and an unprecedented learning efficiency for solving these tasks. 3. As a step toward pilco’s extension to partially
observable Markov decision processes, we propose a principled algorithm for robust filtering and smoothing in GP dynamic systems. Unlike commonly used Gaussian filters for nonlinear systems, it does
neither rely on function linearization nor on finite-sample representations of densities. Our algorithm profits from exact moment matching for predictions while keeping all computations analytically
tractable. We present experimental evidence that demonstrates the robustness and the advantages of our method over unscented Kalman filters, the cubature Kalman filter, and the extended Kalman
Gaussian Processes for Data-Efficient Learning in Robotics and Control
Marc Peter Deisenroth, Dieter Fox, Carl Edward Rasmussen, 2015. (IEEE Transactions on Pattern Analysis and Machine Intelligence). DOI: 10.1109/TPAMI.2013.218.
Autonomous learning has been a promising direction in control and robotics for more than a decade since data-driven learning allows to reduce the amount of engineering knowledge, which is otherwise
required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as
robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in form of expert demonstrations,
realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this article, we follow a different approach and speed up learning by extracting more information
from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and
controller learning our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the art RL our model-based policy search method achieves an
unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
Marc Peter Deisenroth, Carl Edward Rasmussen, June 2009. (In Multidisciplinary Symposium on Reinforcement Learning). Montréal, QC, Canada.
In contrast to humans or animals, artificial learners often require more trials when learning motor control tasks solely based on experience. Efficient autonomous learners will reduce the amount of
engineering required to solve control problems. By using probabilistic forward models, we can employ two key ingredients of biological learning systems to speed up artificial learning. We present a
consistent and coherent Bayesian framework that allows for efficient autonomous experience-based learning. We demonstrate the success of our learning algorithm by applying it to challenging nonlinear
control problems in simulation and in hardware.
Marc Peter Deisenroth, Carl Edward Rasmussen, September 2009. (In 10th International PhD Workshop on Systems and Control). Hluboká nad Vltavou, Czech Republic.
Artificial learners often require many more trials than humans or animals when learning motor control tasks in the absence of expert knowledge. We implement two key ingredients of biological learning
systems, generalization and incorporation of uncertainty into the decision-making process, to speed up artificial learning. We present a coherent and fully Bayesian framework that allows for
efficient artificial learning in the absence of expert knowledge. The success of our learning framework is demonstrated on challenging nonlinear control problems in simulation and in hardware.
Marc Peter Deisenroth, Carl Edward Rasmussen, 2011. (In 28th International Conference on Machine Learning).
In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a
principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from
scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy
improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.
Comment: web site
Marc Peter Deisenroth, Carl Edward Rasmussen, Dieter Fox, June 2011. (In 9th International Conference on Robotics: Science & Systems). Los Angeles, CA, USA.
Over the last years, there has been substantial progress in robust manipulation in unstructured environments. The long-term goal of our work is to get away from precise, but very expensive robotic
systems and to develop affordable, potentially imprecise, self-adaptive manipulator systems that can interactively perform tasks such as playing with children. In this paper, we demonstrate how a
low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials - from scratch. Our manipulator is inaccurate and provides no pose feedback. For
learning a controller in the work space of a Kinect-style depth camera, we use a model-based reinforcement learning technique. Our learning method is data efficient, reduces model bias, and deals
with several noise sources in a principled way during long-term planning. We present a way of incorporating state-space constraints into the learning process and analyze the learning gain by
exploiting the sequential structure of the stacking task.
Comment: project site
Marc Peter Deisenroth, Carl Edward Rasmussen, Jan Peters, April 2008. (In Proceedings of the 16th European Symposium on Artificial Neural Networks (ESANN 2008)). Bruges, Belgium.
Finding an optimal policy in a reinforcement learning (RL) framework with continuous state and action spaces is challenging. Approximate solutions are often inevitable. GPDP is an approximate dynamic
programming algorithm based on Gaussian process (GP) models for the value functions. In this paper, we extend GPDP to the case of unknown transition dynamics. After building a GP model for the
transition dynamics, we apply GPDP to this model and determine a continuous-valued policy in the entire state space. We apply the resulting controller to the underpowered pendulum swing up. Moreover,
we compare our results on this RL task to a nearly optimal discrete DP solution in a fully known environment.
Comment: code. slides
Marc Peter Deisenroth, Carl Edward Rasmussen, Jan Peters, March 2009. (Neurocomputing). Elsevier B. V.. DOI: 10.1016/j.neucom.2008.12.019.
Reinforcement learning (RL) and optimal control of systems with continuous states and actions require approximation techniques in most interesting cases. In this article, we introduce Gaussian
process dynamic programming (GPDP), an approximate value function-based RL algorithm. We consider both a classic optimal control problem, where problem-specific prior knowledge is available, and a
classic RL problem, where only very general priors can be used. For the classic optimal control problem, GPDP models the unknown value functions with Gaussian processes and generalizes dynamic
programming to continuous-valued states and actions. For the RL problem, GPDP starts from a given initial state and explores the state space using Bayesian active learning. To design a fast learner,
available data have to be used efficiently. Hence, we propose to learn probabilistic models of the a priori unknown transition dynamics and the value functions on the fly. In both cases, we
successfully apply the resulting continuous-valued controllers to the under-actuated pendulum swing up and analyze the performances of the suggested algorithms. It turns out that GPDP uses data very
efficiently and can be applied to problems, where classic dynamic programming would be cumbersome.
Comment: code.
Finale Doshi-Velez, Zoubin Ghahramani, 2011. (In 33rd Annual Meeting of the Cognitive Science Society). Boston, MA.
It is commonly stated that reinforcement learning (RL) algorithms learn slower than humans. In this work, we investigate this claim using two standard problems from the RL literature. We compare the
performance of human subjects to RL techniques. We find that context—the meaningfulness of the observations—–plays a significant role in the rate of human RL. Moreover, without contextual
information, humans often fare much worse than classic algorithms. Comparing the detailed responses of humans and RL algorithms, we also find that humans appear to employ rather different strategies
from standard algorithms, even in cases where they had indistinguishable performance to them. Our research both sheds light on human RL and provides insights for improving RL algorithms.
Benjamin Eysenbach, Shixiang Gu, Julian Ibarz, Sergey Levine, Apr 2018. (In 6th International Conference on Learning Representations). Vancouver CANADA.
Deep reinforcement learning algorithms can learn complex behavioral skills, but real-world application of these methods requires a large amount of experience to be collected by the agent. In
practical settings, such as robotics, this involves repeatedly attempting a task, resetting the environment between each attempt. However, not all tasks are easily or automatically reversible. In
practice, this learning process requires extensive human intervention. In this work, we propose an autonomous method for safe and efficient reinforcement learning that simultaneously learns a forward
and reset policy, with the reset policy resetting the environment for a subsequent attempt. By learning a value function for the reset policy, we can automatically determine when the forward policy
is about to enter a non-reversible state, providing for uncertainty-aware safety aborts. Our experiments illustrate that proper use of the reset policy can greatly reduce the number of manual resets
required to learn a task, can reduce the number of unsafe actions that lead to non-reversible states, and can automatically induce a curriculum.
Comment: [Video]
Zoubin Ghahramani, Sam T. Roweis, 1998. (In NIPS). Edited by Michael J. Kearns, Sara A. Solla, David A. Cohn. The MIT Press. ISBN: 0-262-11245-0.
The Expectation Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data sets with missing or hidden variables. It has been applied to system
identification in linear stochastic state-space models, where the state variables are hidden from the observer and both the state and the parameters of the model have to be estimated simultaneously
[9]. We present a generalization of the EM algorithm for parameter estimation in nonlinear dynamical systems. The “expectation” step makes use of Extended Kalman Smoothing to estimate the state,
while the “maximization” step re-estimates the parameters using these uncertain state estimates. In general, the nonlinear maximization step is difficult because it requires integrating out the
uncertainty in the states. However, if Gaussian radial basis function (RBF) approximators are used to model the nonlinearities, the integrals become tractable and the maximization step can be solved
via systems of linear equations.
Shixiang Gu, Ethan Holly, Timothy Lillicrap, Sergey Levine, May 2017. (In IEEE International Conference on Robotics and Automation). SINGAPORE.
Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement
learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered
policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep
reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a
recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough
to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously.
Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or
manually designed representations.
Comment: [Google Blogpost] [MIT Technology Review] [Video]
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine, April 2017. (In 5th International Conference on Learning Representations). Toulon France.
Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample
complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and
Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients
with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample efficient and
stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to
derive two variants of Q-Prop with conservative and aggressive adaptation. We show that conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization
(TRPO) with generalized advantage estimation (GAE), and improves stability over deep deterministic policy gradient (DDPG), the state-of-the-art on-policy and off-policy methods, on OpenAI Gym’s
MuJoCo continuous control environments.
Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Bernhard Schölkopf, Sergey Levine, Dec 2017. (In Advances in Neural Information Processing Systems 31). Long Beach USA.
Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques. On the other hand, on-policy
algorithms are often more stable and easier to use. This paper examines, both theoretically and empirically, approaches to merging on- and off-policy updates for deep reinforcement learning.
Theoretical results show that off-policy updates with a value function estimator can be interpolated with on-policy policy gradient updates whilst still satisfying performance bounds. Our analysis
uses control variate methods to produce a family of policy gradient algorithms, with several recently proposed algorithms being special cases of this family. We then provide an empirical comparison
of these techniques with the remaining algorithmic details fixed, and show how different mixing of off-policy gradient estimates with on-policy samples contribute to improvements in empirical
performance. The final algorithm provides a generalization and unification of existing deep policy gradient techniques, has theoretical guarantees on the bias introduced by off-policy updates, and
improves on the state-of-the-art model-free deep RL methods on a number of OpenAI Gym continuous control benchmarks.
Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine, June 2016. (In 33rd International Conference on Machine Learning). New York USA.
Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions.
However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we
explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the
efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized adantage functions (NAF), as an alternative to the more commonly used policy
gradient and actor-critic methods. NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic
control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local
linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.
Joseph Hall, Carl Edward Rasmussen, Jan Maciejowski, 2011. (In Proceedings of 50th IEEE Conference on Decision and Control and European Control Conference).
The contribution described in this paper is an algorithm for learning nonlinear, reference tracking, control policies given no prior knowledge of the dynamical system and limited interaction with the
system through the learning process. Concepts from the field of reinforcement learning, Bayesian statistics and classical control have been brought together in the formulation of this algorithm which
can be viewed as a form indirect self tuning regulator. On the task of reference tracking using the inverted pendulum it was shown to yield generally improved performance on the best controller
derived from the standard linear quadratic method using only 30 s of total interaction with the system. Finally, the algorithm was shown to work on the double pendulum proving its ability to solve
nontrivial control tasks.
Joseph Hall, Carl Edward Rasmussen, Jan Maciejowski, 2012. (In 51st IEEE Conference on Decision and Control).
Gaussian processes are gaining increasing popularity among the control community, in particular for the modelling of discrete time state space systems. However, it has not been clear how to
incorporate model information, in the form of known state relationships, when using a Gaussian process as a predictive model. An obvious example of known prior information is position and velocity
related states. Incorporation of such information would be beneficial both computationally and for faster dynamics learning. This paper introduces a method of achieving this, yielding faster dynamics
learning and a reduction in computational effort from O(Dn2) to O((D-F)n2) in the prediction stage for a system with D states, F known state relationships and n observations. The effectiveness of the
method is demonstrated through its inclusion in the PILCO learning algorithm with application to the swing-up and balance of a torque-limited pendulum and the balancing of a robotic unicycle in
David Janz, Jiri Hron, Przemyslaw Mazur, José Miguel Hernández-Lobato, Katja Hofmann, Sebastian Tschiatschek, 2019. (NeurIPS).
Posterior sampling for reinforcement learning (PSRL) is an effective method for balancing exploration and exploitation in reinforcement learning. Randomised value functions (RVF) can be viewed as a
promising approach to scaling PSRL. However, we show that most contemporary algorithms combining RVF with neural network function approximation do not possess the properties which make PSRL
effective, and provably fail in sparse reward problems. Moreover, we find that propagation of uncertainty, a property of PSRL previously thought important for exploration, does not preclude this
failure. We use these insights to design Successor Uncertainties (SU), a cheap and easy to implement RVF algorithm that retains key properties of PSRL. SU is highly effective on hard tabular
exploration benchmarks. Furthermore, on the Atari 2600 domain, it surpasses human performance on 38 of 49 games tested (achieving a median human normalised score of 2.09), and outperforms its closest
RVF competitor, Bootstrapped DQN, on 36 of those.
Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, Jose Miguel Hernndez Lobato, Richard E. Turner, Douglas Eck, Aug 2017. (In 34th International Conference on Machine Learning). Sydney AUSTRALIA.
This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as
well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is
treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to
the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications; 1)
generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated
sequences, while maintaining information learned from data.
Comment: [MIT Technology Review] [Video]
Juš Kocijan, Roderick Murray-Smith, Carl Edward Rasmussen, Agathe Girard, 2004. (In American Control Conference). (Proceedings of the ACC 2004). Boston, MA.
Gaussian process models provide a probabilistic non-parametric modelling approach for black-box identi cation of non-linear dynamic systems. The Gaussian processes can highlight areas of the input
space where prediction quality is poor, due to the lack of data or its complexity, by indicating the higher variance around the predicted mean. Gaussian process models contain noticeably less coef
cients to be optimised. This paper illustrates possible application of Gaussian process models within model-based predictive control. The extra information provided within Gaussian process model is
used in predictive control, where optimisation of control signal takes the variance information into account. The predictive control principle is demonstrated on control of pH process benchmark.
Juš Kocijan, Roderick Murray-Smith, Carl Edward Rasmussen, Bojan Likar, 2003. (In IEEE Region 8 Eurocon 2003: Computer as a Tool). Edited by B. Zajc, M. Tkal.
This paper describes model-based predictive control based on Gaussian processes. Gaussian process models provide a probabilistic non-parametric modelling approach for black-box identification of
non-linear dynamic systems. It offers more insight in variance of obtained model response, as well as fewer parameters to determine than other models. The Gaussian processes can highlight areas of
the input space where prediction quality is poor, due to the lack of data or its complexity, by indicating the higher variance around the predicted mean. This property is used in predictive control,
where optimisation of control signal takes the variance information into account. The predictive control principle is demonstrated on a simulated example of nonlinear system.
Malte Kuß, Carl Edward Rasmussen, April 2006. (In Advances in Neural Information Processing Systems 18). Edited by Y. Weiss, B. Schölkopf, J. Platt. Cambridge, MA, USA. Whistler, BC, Canada. The MIT
Gaussian processes are attractive models for probabilistic classification but unfortunately exact inference is analytically intractable. We compare Laplace’s method and Expectation Propagation (EP)
focusing on marginal likelihood estimates and predictive performance. We explain theoretically and corroborate empirically that EP is superior to Laplace. We also compare to a sophisticated MCMC
scheme and show that EP is surprisingly accurate.
Learning-based Nonlinear Model Predictive Control
Daniel Limon, Jan-Peter Calliess, Jan Maciejowski, July 2017. (In IFAC 2017 World Congress). Toulouse, France. DOI: 10.1016/j.ifacol.2017.08.1050.
This paper presents stabilizing Model Predictive Controllers (MPC) in which prediction models are inferred from experimental data of the inputs and outputs of the plant. Using a nonparametric machine
learning technique called LACKI, the estimated (possibly nonlinear) model function together with an estimation of Hoelder constant is provided. Based on these, a number of predictive controllers with
stability guaranteed by design are proposed. Firstly, the case when the prediction model is estimated off- line is considered and robust stability and recursive feasibility is ensured by using
tightened constraints in the optimisation problem. This controller has been extended to the more interesting and complex case: the online learning of the model, where the new data collected from
feedback is added to enhance the prediction model. A on-line learning MPC based on a double sequence of predictions is proposed. Stability of the online learning MPC is proved. These controllers are
illustrated by simulation.
Joseph Marino, Alexandre Piche, Alessandro Davide Ialongo, Yisong Yue, 2021. (In Advances in Neural Information Processing Systems 34). Edited by M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang,
J. Wortman Vaughan. Curran Associates, Inc..
Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference
perspective on RL, policy networks, when used with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly.
However, direct amortized mappings can yield suboptimal policy estimates and restricted distributions, limiting performance and exploration. Given this perspective, we consider the more flexible
class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over direct amortization on benchmark
continuous control tasks.
Rowan McAllister, 2016. University of Cambridge, Department of Engineering, Cambridge, UK.
Applications to learn control of unfamiliar dynamical systems with increasing autonomy are ubiquitous. From robotics, to finance, to industrial processing, autonomous learning helps obviate a heavy
reliance on experts for system identification and controller design. Often real world systems are nonlinear, stochastic, and expensive to operate (e.g. slow, energy intensive, prone to wear and
tear). Ideally therefore, nonlinear systems can be identified with minimal system interaction. This thesis considers data efficient autonomous learning of control of nonlinear, stochastic systems.
Data efficient learning critically requires probabilistic modelling of dynamics. Traditional control approaches use deterministic models, which easily overfit data, especially small datasets. We use
probabilistic Bayesian modelling to learn systems from scratch, similar to the PILCO algorithm, which achieved unprecedented data efficiency in learning control of several benchmarks. We extend PILCO
in three principle ways. First, we learn control under significant observation noise by simulating a filtered control process using a tractably analytic framework of Gaussian distributions. In
addition, we develop the `latent variable belief Markov decision process’ when filters must predict under real-time constraints. Second, we improve PILCO’s data efficiency by directing exploration
with predictive loss uncertainty and Bayesian optimisation, including a novel approximation to the Gittins index. Third, we take a step towards data efficient learning of high-dimensional control
using Bayesian neural networks (BNN). Experimentally we show although filtering mitigates adverse effects of observation noise, much greater performance is achieved when optimising controllers with
evaluations faithful to reality: by simulating closed-loop filtered control if executing closed-loop filtered control. Thus, controllers are optimised w.r.t. how they are used, outperforming filters
applied to systems optimised by unfiltered simulations. We show directed exploration improves data efficiency. Lastly, we show BNN dynamics models are almost as data efficient as Gaussian process
models. Results show data efficient learning of high-dimensional control is possible as BNNs scale to high-dimensional state inputs.
Rowan McAllister, Carl Edward Rasmussen, December 2017. (In Advances in Neural Information Processing Systems 31). Long Beach, California.
We present a data-efficient reinforcement learning method for continuous state-action systems under significant observation noise. Data-efficient solutions under small noise exist, such as PILCO
which learns the cartpole swing-up task in 30s. PILCO evaluates policies by planning state-trajectories using a dynamics model. However, PILCO applies policies to the observed state, therefore
planning in observation space. We extend PILCO with filtering to instead plan in belief space, consistent with partially observable Markov decisions process (POMDP) planning. This enables
data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO
algorithm. We test our method on the cartpole swing-up task, which involves nonlinear dynamics and requires nonlinear control.
Andrew McHutchon, 2014. University of Cambridge, Department of Engineering, Cambridge, UK.
In many scientific disciplines it is often required to make predictions about how a system will behave or to deduce the correct control values to elicit a particular desired response. Efficiently
solving both of these tasks relies on the construction of a model capturing the system’s operation. In the most interesting situations, the model needs to capture strongly nonlinear effects and deal
with the presence of uncertainty and noise. Building models for such systems purely based on a theoretical understanding of underlying physical principles can be infeasibly complex and require a
large number of simplifying assumptions. An alternative is to use a data-driven approach, which builds a model directly from observations. A powerful and principled approach to doing this is to use a
Gaussian Process (GP). In this thesis we start by discussing how GPs can be applied to data sets which have noise affecting their inputs. We present the “Noisy Input GP”, which uses a simple
local-linearisation to refer the input noise into heteroscedastic output noise, and compare it to other methods both theoretically and empirically. We show that this technique leads to a effective
model for nonlinear functions with input and output noise. We then consider the broad topic of GP state space models for application to dynamical systems. We discuss a very wide variety of approaches
for using GPs in state space models, including introducing a new method based on moment-matching, which consistently gave the best performance. We analyse the methods in some detail including
providing a systematic comparison between approximate-analytic and particle methods. To our knowledge such a comparison has not been provided before in this area. Finally, we investigate an automatic
control learning framework, which uses Gaussian Processes to model a system for which we wish to design a controller. Controller design for complex systems is a difficult task and thus a framework
which allows an automatic design directly from data promises to be extremely useful. We demonstrate that the previously published framework cannot cope with the presence of observation noise but that
the introduction of a state space model dramatically improves its performance. This contribution, along with some other suggested improvements opens the door for this framework to be used in
real-world applications.
Roderick Murray-Smith, Daniel Sbarbaro, Carl Edward Rasmussen, Agathe Girard, August 2003. (In IFAC SYSID 2003). Edited by P. Van den Hof, B. Wahlberg, S. Weiland. (Proceedings of the 13th IFAC
Symposium on System Identification). Oxford, UK. Rotterdam, The Netherlands. Elsevier Science Ltd.
Nonparametric Gaussian Process models, a Bayesian statistics approach, are used to implement a nonlinear adaptive control law. Predictions, including propagation of the state uncertainty are made
over a k-step horizon. The expected value of a quadratic cost function is minimised, over this prediction horizon, without ignoring the variance of the model predictions. The general method and its
main features are illustrated on a simulation example.
Pedro A. Ortega, 2011. Department of Engineering, University of Cambridge,
The aim of this thesis is to present a mathematical framework for conceptualizing and constructing adaptive autonomous systems under resource constraints. The first part of this thesis contains a
concise presentation of the foundations of classical agency: namely the formalizations of decision making and learning. Decision making includes: (a) subjective expected utility (SEU) theory, the
framework of decision making under uncertainty; (b) the maximum SEU principle to choose the optimal solution; and (c) its application to the design of autonomous systems, culminating in the Bellman
optimality equations. Learning includes: (a) Bayesian probability theory, the theory for reasoning under uncertainty that extends logic; and (b) Bayes-Optimal agents, the application of Bayesian
probability theory to the design of optimal adaptive agents. Then, two major problems of the maximum SEU principle are highlighted: (a) the prohibitive computational costs and (b) the need for the
causal precedence of the choice of the policy. The second part of this thesis tackles the two aforementioned problems. First, an information-theoretic notion of resources in autonomous systems is
established. Second, a framework for resource-bounded agency is introduced. This includes: (a) a maximum bounded SEU principle that is derived from a set of axioms of utility; (b) an axiomatic model
of probabilistic causality, which is applied for the formalization of autonomous systems having uncertainty over their policy and environment; and (c) the Bayesian control rule, which is derived from
the maximum bounded SEU principle and the model of causality, implementing a stochastic adaptive control law that deals with the case where autonomous agents are uncertain about their policy and
Pedro A. Ortega, Daniel A. Braun, 2010. (In The third conference on artificial general intelligence). Paris. Atlantis Press.
Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that
rewards should be real- valued, additive and order-preserving, where the later implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely
determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected
utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be
achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent’s I/O distribution. Thus, our results allow for
an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics.
Pedro A. Ortega, Daniel A. Braun, 2010. (In The third conference on artificial general intelligence). Paris. Atlantis Press.
Explaining adaptive behavior is a central problem in artificial intelligence research. Here we formalize adaptive agents as mixture distributions over sequences of inputs and outputs (I/O). Each
distribution of the mixture constitutes a “possible world”, but the agent does not know which of the possible worlds it is actually facing. The problem is to adapt the I/O stream in a way that is
compatible with the true world. A natural measure of adaptation can be obtained by the Kullback Leibler (KL) divergence between the I/O distribution of the true world and the I/O distribution
expected by the agent that is uncertain about possible worlds. In the case of pure input streams, the Bayesian mixture provides a well-known solution for this problem. We show, however, that in the
case of I/O streams this solution breaks down, because outputs are issued by the agent itself and require a different probabilistic syntax as provided by intervention calculus. Based on this
calculus, we obtain a Bayesian control rule that allows modeling adaptive behavior with mixture distributions over I/O streams. This rule might allow for a novel approach to adaptive control based on
a minimum KL-principle.
Pedro A. Ortega, Daniel A. Braun, 2010. (Journal of Artificial Intelligence Research). DOI: 10.1613/jair.3062.
This paper proposes a method to construct an adaptive agent that is univemacmacrsal with respect to a given class of experts, where each expert is designed specifically for a particular environment.
This adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. If the agent is
a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the agent is active, then its past actions need to be treated as causal interventions on the I/O stream
rather than normal probability conditions. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements
adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert.
Pedro A. Ortega, Daniel A. Braun, 2010. Dept. of Engineering, University of Cambridge,
Classic decision-theory is based on the maximum expected utility (MEU) principle, but crucially ignores the resource costs incurred when determining optimal decisions. Here we propose an axiomatic
framework for bounded decision-making that considers resource costs. Agents are formalized as probability measures over input-output streams. We postulate that any such probability measure can be
assigned a corresponding conjugate utility function based on three axioms: utilities should be real-valued, additive and monotonic mappings of probabilities. We show that these axioms enforce a
unique conversion law between utility and probability (and thereby, information). Moreover, we show that this relation can be characterized as a variational principle: given a utility function, its
conjugate probability measure maximizes a free utility functional. Transformations of probability measures can then be formalized as a change in free utility due to the addition of new constraints
expressed by a target utility function. Accordingly, one obtains a criterion to choose a probability measure that trades off the maximization of a target utility function and the cost of the
deviation from a reference distribution. We show that optimal control, adaptive estimation and adaptive control problems can be solved this way in a resource-efficient way. When resource costs are
ignored, the MEU principle is recovered. Our formalization might thus provide a principled approach to bounded rationality that establishes a close link to information theory.
Pedro A. Ortega, Daniel A. Braun, 2011. (In The fourth conference on artificial general intelligence). Springer-Verlag. Lecture Notes on Artificial Intelligence.
Perfectly rational decision-makers maximize expected utility, but crucially ignore the resource costs incurred when determining optimal actions. Here we employ an axiomatic framework for bounded
rational decision-making based on a thermodynamic interpretation of resource costs as information costs. This leads to a variational free utility principle akin to thermodynamical free energy that
trades off utility and information costs. We show that bounded optimal control solutions can be derived from this variational principle, which leads in general to stochastic policies. Furthermore, we
show that risk-sensitive and robust (minimax) control schemes fall out naturally from this framework if the environment is considered as a bounded rational and perfectly rational opponent,
respectively. When resource costs are ignored, the maximum expected utility principle is recovered.
Pedro A. Ortega, Daniel A. Braun, Simon Godsill, 2011. (In The fourth conference on artificial general intelligence). Springer-Verlag. Lecture Notes on Artificial Intelligence.
We present an actor-critic scheme for reinforcement learning in complex domains. The main contribution is to show that planning and I/O dynamics can be separated such that an intractable planning
problem reduces to a simple multi-armed bandit problem, where each lever stands for a potentially arbitrarily complex policy. Furthermore, we use the Bayesian control rule to construct an adaptive
bandit player that is universal with respect to a given class of optimal bandit players, thus indirectly constructing an adaptive agent that is universal with respect to a given class of policies.
Paavo Parmas, Carl Edward Rasmussen, Jan Peters, Kenji Doya, 2018. (In 35th International Conference on Machine Learning).
Previously, the exploding gradient problem has been explained to be central in deep learning and model-based reinforcement learning, because it causes numerical issues and instability in
optimization. Our experiments in model-based reinforcement learning imply that the problem is not just a numerical issue, but it may be caused by a fundamental chaos-like nature of long chains of
nonlinear computations. Not only do the magnitudes of the gradients become large, the direction of the gradients becomes essentially random. We show that reparameterization gradients suffer from the
problem, while likelihood ratio gradients are robust. Using our insights, we develop a model-based policy search framework, Probabilistic Inference for Particle-Based Policy Search (PIPPS), which is
easily extensible, and allows for almost arbitrary models and policies, while simultaneously matching the performance of previous data-efficient learning algorithms. Finally, we invent the total
propagation algorithm, which efficiently computes a union over all pathwise derivative depths during a single backwards pass, automatically giving greater weight to estimators with lower variance,
sometimes improving over reparameterization gradients by 106 times.
Robert Pinsler, Riad Akrour, Takayuki Osa, Jan Peters, Gerhard Neumann, May 2018. (In IEEE International Conference on Robotics and Automation). Brisbane, Australia.
While reinforcement learning has led to promising results in robotics, defining an informative reward function is challenging. Prior work considered including the human in the loop to jointly learn
the reward function and the optimal policy. Generating samples from a physical robot and requesting human feedback are both taxing efforts for which efficiency is critical. We propose to learn reward
functions from both the robot and the human perspectives to improve on both efficiency metrics. Learning a reward function from the human perspective increases feedback efficiency by assuming that
humans rank trajectories according to a low-dimensional outcome space. Learning a reward function from the robot perspective circumvents the need for a dynamics model while retaining the sample
efficiency of model-based approaches. We provide an algorithm that incorporates bi-perspective reward learning into a general hierarchical reinforcement learning framework and demonstrate the merits
of our approach on a toy task and a simulated robot grasping task.
Robert Pinsler, Peter Karkus, Andras Kupcsik, David Hsu, Wee Sun Lee, May 2019. (In IEEE International Conference on Robotics and Automation). Montreal, Canada.
Scarce data is a major challenge to scaling robot learning to truly complex tasks, as we need to generalize locally learned policies over different task contexts. Contextual policy search offers
data-efficient learning and generalization by explicitly conditioning the policy on a parametric context space. In this paper, we further structure the contextual policy representation. We propose to
factor contexts into two components: target contexts that describe the task objectives, e.g. target position for throwing a ball; and environment contexts that characterize the environment, e.g.
initial position or mass of the ball. Our key observation is that experience can be directly generalized over target contexts. We show that this can be easily exploited in contextual policy search
algorithms. In particular, we apply factorization to a Bayesian optimization approach to contextual policy search both in sampling-based and active learning settings. Our simulation results show
faster learning and better generalization in various robotic domains. See our supplementary video: https://youtu.be/MNTbBAOufDY.
Vitchyr Pong, Shixiang Gu, Murtaza Dalal, Sergey Levine, Apr 2018. (In 6th International Conference on Learning Representations). Vancouver CANADA.
Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving
challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring
much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as
model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based
control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance
that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to
state-of-the-art model-based and model-free methods.
Carl Edward Rasmussen, Marc Peter Deisenroth, November 2008. (In Recent Advances in Reinforcement Learning). Edited by S. Girgin, M. Loth, R. Munos, P. Preux, D. Ryabko. Villeneuve d'Ascq, France.
Springer-Verlag. Lecture Notes in Computer Science (LNCS).
We provide a novel framework for very fast model-based reinforcement learning in continuous state and action spaces. The framework requires probabilistic models that explicitly characterize their
levels of confidence. Within this framework, we use flexible, non-parametric models to describe the world based on previously collected experience. We demonstrate learning on the cart-pole problem in
a setting where we provide very limited prior knowledge about the task. Learning progresses rapidly, and a good policy is found after only a hand-full of iterations.
Comment: videos and more. slides.
Carl Edward Rasmussen, Malte Kuß, December 2004. (In Advances in Neural Information Processing Systems 16). Edited by S. Thrun, L.K. Saul, B. Schölkopf. Cambridge, MA, USA. The MIT Press.
We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation
of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two dimensional state space. Further, we speculate that the intrinsic ability
of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the
focus of much of reinforcement learning.
Mark Rowland, Marc G. Bellemare, Will Dabney, Rémi Munos, Yee Whye Teh, April 2018. (In 21st International Conference on Artificial Intelligence and Statistics). Playa Blanca, Lanzarote, Canary
Distributional approaches to value-based reinforcement learning model the entire distribution of returns, rather than just their expected values, and have recently been shown to yield
state-of-the-art empirical performance. This was demonstrated by the recently proposed C51 algorithm, based on categorical distributional reinforcement learning (CDRL) [Bellemare et al., 2017].
However, the theoretical properties of CDRL algorithms are not yet well understood. In this paper, we introduce a framework to analyse CDRL algorithms, establish the importance of the projected
distributional Bellman operator in distributional RL, draw fundamental connections between CDRL and the Cramér distance, and give a proof of convergence for sample-based categorical distributional
reinforcement learning algorithms.
Sebastian Thrun, Yufeng Liu, Daphne Koller, Andrew Y. Ng, Zoubin Ghahramani, Hugh F. Durrant-Whyte, 2004. (I. J. Robotic Res.).
This paper describes a scalable algorithm for the simultaneous mapping and localization (SLAM) problem. SLAM is the problem of acquiring a map of a static environment with a mobile robot. The vast
majority of SLAM algorithms are based on the extended Kalman filter (EKF). This paper advocates an algorithm that relies on the dual of the EKF, the extended information filter (EIF). We show that
when represented in the information form, map posteriors are dominated by a small number of links that tie together nearby features in the map. This insight is developed into a sparse variant of the
EIF, called the sparse extended information filters (SEIF). SEIFs represent maps by graphical networks of features that are locally interconnected, where links represent relative information between
pairs of nearby features, as well as information about the robot’s pose relative to the map. We show that all essential update equations in SEIFs can be executed in constant time, irrespective of the
size of the map. We also provide empirical results obtained for a benchmark data set collected in an outdoor environment, and using a multi-robot mapping simulation.
No matching items
Back to top
|
{"url":"https://mlg.eng.cam.ac.uk/research/rl/","timestamp":"2024-11-14T17:53:00Z","content_type":"application/xhtml+xml","content_length":"134065","record_id":"<urn:uuid:6a470a41-2d14-4008-85dd-f12f3c48c957>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00511.warc.gz"}
|
Model Attributes
Model Attributes#
These are model attributes, meaning that they are associated with the overall model (as opposed to being associated with a particular variable or constraint of the model). You should use one of the
various get routines to retrieve the value of an attribute. These are described at the beginning of this section. For the object-oriented interfaces, model attributes are retrieved by invoking the
get method on the model object itself. For attributes that can be modified directly by the user, you can use one of the various set methods.
Attempting to query an attribute that is not available will produce an error. In C, the attribute query routine will return a DATA_NOT_AVAILABLE error code. The object-oriented interfaces will throw
an exception.
Additional model attributes can be found in the Quality Attributes, Multi-objective Attributes, and Multi-Scenario Attributes sections.
The number of linear constraints in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of decision variables in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of Special Ordered Set (SOS) constraints in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of quadratic constraints in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of general constraints in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of non-zero coefficients in the linear constraints of the model. For models with more than 2 billion non-zero coefficients use DNumNZs.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
The number of non-zero coefficients in the linear constraints of the model. This attribute is provided in double precision format to accurately count the number of non-zeros in models that contain
more than 2 billion non-zero coefficients.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of terms in the lower triangle of the Q matrix in the quadratic objective.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of non-zero coefficients in the quadratic constraints.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of integer variables in the model. This includes both binary variables and general integer variables.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of binary variables in the model.
For examples of how to query or modify attributes, refer to our Attribute Examples.
The number of variables in the model with piecewise-linear objective functions. You can query the function for a specific variable using the appropriate getPWLObj method for your language (in C, C++,
C#, Java, and Python).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: string
• Modifiable: Yes
The name of the model. The name has no effect on Gurobi algorithms. It is output in the Gurobi log file when a model is solved, and when a model is written to a file.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: int
• Modifiable: Yes
Optimization sense. The default 1 value indicates that the objective is to minimize the objective. Setting this attribute to -1 changes the sense to maximization.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: Yes
A constant value that is added into the model objective. The default value is 0.
For examples of how to query or modify attributes, refer to our Attribute Examples.
A hash value computed on model data and attributes that can influence the optimization process. The intent is that models that differ in any meaningful way will have different fingerprints (almost
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
The objective value for the current solution. If the model was solved to optimality, then this attribute gives the optimal objective value.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
The best known bound on the optimal objective. When solving a model, the algorithm maintains both a lower bound and an upper bound on the optimal objective value. For a minimization model, the upper
bound is the objective of the best known feasible solution, while the lower bound gives a bound on the best possible objective.
For MIP models, in contrast to ObjBoundC, this attribute takes advantage of objective integrality information to round to a tighter bound. For example, if the objective is known to take an integral
value and the current best bound is 1.5, ObjBound will return 2.0 while ObjBoundC will return 1.5.
For LP models, ObjBound and ObjBoundC always return the same value. Note also that these attributes are set only after the end of the solve, and are not valid during callback invocations.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
The best known bound on the optimal objective. When solving a model, the algorithm maintains both a lower bound and an upper bound on the optimal objective value. For a minimization model, the upper
bound is the objective of the best known feasible solution, while the lower bound gives a bound on the best possible objective.
For MIP models, in contrast to ObjBound, this attribute does not take advantage of objective integrality information to round to a tighter bound. For example, if the objective is known to take an
integral value and the current best bound is 1.5, ObjBound will return 2.0 while ObjBoundC will return 1.5.
For LP models, ObjBound and ObjBoundC always return the same value. Note also that these attributes are set only after the end of the solve, and are not valid during callback invocations.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Bound on the objective of undiscovered MIP solutions. The MIP solver stores solutions that it finds during the MIP search, but it only provides quality guarantees for those whose objective is at
least as good as PoolObjBound. Specifically, further exploration of the MIP search tree will not find solutions whose objective is better than PoolObjBound.
The difference between PoolObjBound and ObjBound is that the former gives an objective bound for undiscovered solutions, while the latter gives a bound for any solution. Note that PoolObjBound and
ObjBound can only have different values if parameter PoolSearchMode is set to 2.
Please consult the section on Solution Pools for a more detailed discussion of this topic.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
This attribute is used to query the objective value of the \(k\)-th solution stored in the pool of feasible solutions found so far for the problem. You set \(k\) using the SolutionNumber parameter.
The number of stored solutions can be queried using the SolCount attribute.
Please consult the section on Solution Pools for a more detailed discussion of this topic.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Current relative MIP optimality gap; computed as \(\vert ObjBound-ObjVal\vert/\vert ObjVal\vert\) (where ObjBound and ObjVal are the MIP objective bound and incumbent solution objective,
respectively. Returns GRB_INFINITY when an incumbent solution has not yet been found, when no objective bound is available, or when the current incumbent objective is 0.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Runtime for the most recent optimization (in seconds). Note that all times reported by the Gurobi Optimizer are wall-clock times.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Work spent on the most recent optimization. In contrast to Runtime, work is deterministic, meaning that you will get exactly the same result every time provided you solve the same model on the same
hardware with the same parameter and attribute settings. The units on this metric are arbitrary. One work unit corresponds very roughly to one second on a single thread, but this greatly depends on
the hardware on which Gurobi is running and the model that is being solved.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Current amount (in GB) of memory allocated by the environment in which the model lives.
• Type: double
• Modifiable: No
Maximum amount (in GB) of memory that was allocated at any point in time by the environment in which the model lives.
Current optimization status for the model. Status values are described in the Status Code section.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Number of stored solutions from the most recent optimization.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Number of simplex iterations performed during the most recent optimization.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Number of barrier iterations performed during the most recent optimization.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Number of branch-and-cut nodes explored in the most recent optimization.
For examples of how to query or modify attributes, refer to our Attribute Examples.
This attribute is used to query the winning method after a continuous problem has been solved with concurrent optimization. In this case it returns the corresponding method identifier 0 (for primal
Simplex), 1 (for dual Simplex), or 2 (for Barrier). In all other cases -1 is returned.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Indicates whether the model is a MIP. Note that any discrete elements make the model a MIP. Discrete elements include binary, integer, semi-continuous, semi-integer variables, SOS constraints, and
general constraints. In addition, models having multiple objectives or multiple scenarios are considered as MIP models, even when all variables are continuous and all constraints are linear.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Indicates whether the model is a quadratic programming problem. Note that a model with both a quadratic objective and quadratic constraints is classified as a QCP, not a QP.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Indicates whether the model has quadratic constraints.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Indicates whether the model has multiple objectives.
Note that the case where the model has a single objective (NumObj = 1) is slightly ambiguous. If you used setObjectiveN to set your objective, or if you set any of the multi-objective attributes
(e.g., ObjNPriority), then the model is considered to be a multi-objective model. Otherwise, it is not.
To reset a multi-objective model back to a single objective model, you should set the NumObj attribute to 0, call model update, and then set a new single objective.
For examples of how to query or modify attributes, refer to our Attribute Examples.
Indicates whether the current Irreducible Inconsistent Subsystem (IIS) is minimal. This attribute is only available after you have computed an IIS on an infeasible model. It will normally take value
1, but it may take value 0 if the IIS computation was stopped early (e.g., due to a time limit or user interrupt).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum matrix coefficient (in absolute value) in the linear constraint matrix.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum non-zero matrix coefficient (in absolute value) in the linear constraint matrix.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum (finite) variable bound.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) variable bound.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum linear objective coefficient (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) linear objective coefficient (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum linear constraint right-hand side value (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) linear constraint right-hand side value (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum coefficient in the quadratic part of all quadratic constraint matrices (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) coefficient in the quadratic part of all quadratic constraint matrices (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum coefficient in the linear part of all quadratic constraint matrices (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) coefficient in the linear part of all quadratic constraint matrices (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum quadratic constraint right-hand side value (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) quadratic constraint right-hand side value (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Maximum coefficient of the quadratic terms in the objective (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Minimum (non-zero) coefficient of the quadratic terms in the objective (in absolute value).
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Number of open branch-and-cut nodes at the end of the most recent optimization. An open node is one that has been created but not yet explored.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Estimated condition number for the current LP basis matrix. Only available for basic solutions.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Exact condition number for the current LP basis matrix. Only available for basic solutions. The exact condition number is much more expensive to compute than the estimate that you get from the Kappa
attribute. Only available for basic solutions.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: double
• Modifiable: No
Together, attributes FarkasDual and FarkasProof provide a certificate of infeasibility for the given infeasible problem. Specifically, FarkasDual provides a vector \(\lambda\) that can be used to
form the following inequality from the original constraints that is trivially infeasible within the bounds of the variables:
\[\lambda^tAx \leq \lambda^tb.\]
This inequality is valid even if the original constraints have a mix of less-than and greater-than senses because \(\lambda_i \geq 0\) if the \(i\)-th constraint has a \(\leq\) sense and \(\lambda_i
\leq 0\) if the \(i\)-th constraint has a \(\geq\) sense.
\[\bar{a} := \lambda^tA\]
be the coefficients of this inequality and
\[\bar{b} := \lambda^tb\]
be its right hand side. With \(L_j\) and \(U_j\) being the lower and upper bounds of the variables \(x_j\), we have \(\bar{a}_j \geq 0\) if \(U_j = \infty\), and \(\bar{a}_j \leq 0\) if \(L_j = -\
The minimum violation of the Farkas constraint is achieved by setting \(x^*_j := L_j\) for \(\bar{a}_j > 0\) and \(x^*_j := U_j\) for \(\bar{a}_j < 0\). Then, we can calculate the minimum violation
\[\beta := \bar{a}^tx^* - \bar{b} = \sum\limits_{j:\bar{a}_j>0}\bar{a}_jL_j + \sum\limits_{j:\bar{a}_j<0}\bar{a}_jU_j - \bar{b}\]
where \(\beta>0\).
The FarkasProof attribute gives the value of \(\beta\).
These attributes are only available when parameter InfUnbdInfo is set to 1.
For examples of how to query or modify attributes, refer to our Attribute Examples.
After the tuning tool has been run, this attribute reports the number of parameter sets that were stored. This value will be zero if no improving parameter sets were found, and its upper bound is
determined by the TuneResults parameter.
For examples of how to query or modify attributes, refer to our Attribute Examples.
• Type: int
• Modifiable: Yes
Number of MIP starts in the model. Decreasing this attribute will discard existing MIP starts. Increasing it will create new MIP starts (initialized to undefined).
You can use the StartNumber parameter to query or modify start values for different MIP starts, or to append a new one. The value of StartNumber should always be less than NumStart.
For examples of how to query or modify attributes, refer to our Attribute Examples.
License expiration date. The format is YYYYMMDD, so for example if the license currently in use expired on July 20, 2018, the result would be 20180720. If the license has no expiration date, the
result will be 99999999.
This attribute is available for node licenses and for clients of a Gurobi Compute Server. Unfortunately, this attribute isn’t available for clients of a Gurobi Token Server.
For examples of how to query or modify attributes, refer to our Attribute Examples.
|
{"url":"https://docs.gurobi.com/projects/optimizer/en/current/reference/attributes/model.html#attrmaxqobjcoeff","timestamp":"2024-11-14T08:01:29Z","content_type":"text/html","content_length":"111523","record_id":"<urn:uuid:2c402a0f-60be-42af-8bf4-868835f9d782>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00035.warc.gz"}
|
Semiclassical method of analysis and estimation of the orbital binding energies in many-electron atoms and ions
periodic system, semiclassic approximation, electron-binding energy, atomic number scaling, ionization state, orbital angular momentum
PACS: 03.65.−w
DOI: 10.3367/UFNe.2018.02.038289 URL:
000466030200005 2-s2.0-85067789561 2019PhyU...62..186S Citation:
Shpatakovskaya G V "Semiclassical method of analysis and estimation of the orbital binding energies in many-electron atoms and ions"
Phys. Usp. 62
186–197 (2019)
Received: 18th, October 2017, revised: 20th, January 2018, accepted: 9th, February 2018
|
{"url":"https://ufn.ru/en/articles/2019/2/e/","timestamp":"2024-11-15T04:23:32Z","content_type":"text/html","content_length":"20260","record_id":"<urn:uuid:0bef54d8-006f-43da-9ea4-0f91fb677bd4>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00693.warc.gz"}
|
Matt Hastings wins a Simons Investigator 2012 award
The Simons Foundation has just announced the recipients of the Simons Investigator awards for 2012. These awards are similar in spirit to the MacArthur awards: the recipients did not know they were
under consideration for the grant, and you can’t apply for it. Rather, you must be nominated by a panel. Each award winner will each receive $100,000 annually for 5 years (and possibly renewable for
an additional 5 years), and their departments and institutions each get annual contributions of $10,000 and $22,000 respectively.
This year, they made awards to a collection of 21 mathematicians, theoretical physicists, and theoretical computer scientists. There are a lot of good names on this list, but the one that overlaps
most with the quantum information community is undoubtedly Matt Hastings. The citation for his award specifically mentions his important contributions to quantum theory such as the 1D area law and
the stability result for topological order (joint with Bravyi and Michalakis). However, it doesn’t mention anything about superadditivity of quantum channels!
Here is the citation for those of you too lazy to click through:
Matthew Hastings’ work combines physical insight and mathematical power to make profound contributions to a range of topics in physics and related fields. His Ph.D. thesis produced breakthrough
insights into the multifractal nature of diffusion-limited aggregation, a problem that had stymied statistical physicists for more than a decade. Hastings’ recent work has focused on proving
rigorous results on fundamental questions of quantum theory, including the stability of topological quantum order under local perturbations. His results on area laws and quantum entanglement and
his proof of a remarkable extension of the Lieb-Schulz-Mattis theorem to dimensions greater than one have provided foundational mathematical insights into topological quantum computing and
quantum mechanics more generally.
Congratulations to Matt and the rest of the 2012 recipients.
5 Replies to “Matt Hastings wins a Simons Investigator 2012 award”
1. About time! Congrats Matt! I need to get my act together and finish updating our result on the Quantum Hall Conductance…
2. Congrats to Matt!
3. Hooray Matt! Thanks for patiently teaching me a little physics.
4. Looking at the other 20, I am very humbled! But since QI sits somewhere in the intersection of physics, computer science, and math, maybe it is appropriate to have QI represented in an award like
this. I think one of the great things about QI is that people from such disparate fields have been so willing to spend time explaining things, even if often my questions about other fields have
been basic undergraduate material.
|
{"url":"https://dabacon.org/pontiff/2012/07/26/matt-hastings-wins-a-simons-investigator-2012-award/","timestamp":"2024-11-02T11:28:43Z","content_type":"text/html","content_length":"101287","record_id":"<urn:uuid:adb524d4-a2f1-4631-b5ac-3e9a151fa317>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00860.warc.gz"}
|
Science:Math Exam Resources/Courses/MATH100 A/December 2023/Question 28(b)
MATH100 A December 2023
• Q1 • Q2 • Q3 • Q4 • Q5 • Q6 • Q7 • Q8 • Q9 • Q10 • Q11 • Q12 • Q13 • Q14 • Q15 • Q16 • Q17 • Q18 • Q19 • Q20 • Q21 • Q22 • Q23 • Q24 • Q25 • Q26 • Q27(a) • Q27(b) • Q27(c) • Q28(a) • Q28(b) • Q29
(a) • Q29(b) • Q30 •
Question 28(b)
The goal of this question is to find the number ${\textstyle a}$ such that the triangle consisting of the portion of the first quadrant that lies below the tangent line to ${\textstyle f(x)}$ at ${\
textstyle x=a}$ has the largest possible area.
Part B: Find the number ${\textstyle a}$ such that the triangle consisting of the portion of the first quadrant that lies below the tangent line to ${\textstyle f(x)}$ at ${\textstyle x=a}$ has the
largest area possible.
Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is
correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you?
If you are stuck, check the hints below. Read the first one and consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it! If after a while you are still
stuck, go for the next hint.
Hint 1
From the plot, you can see that the triangle has vertices at the origin, the ${\textstyle y}$-intercept of the tangent line, and the ${\textstyle x}$-intercept of the tangent line. Can you make an
expression for the area of the triangle that depends on ${\textstyle a}$?
Hint 2
This is a maximisation problem, you will need to construct an expression for the area of the triangle in terms of the point ${\textstyle x=a}$. First determine how the ${\textstyle x}$- and ${\
textstyle y}$-intercepts depend on the point ${\textstyle x=a}$, then use the formula for the area of the triangle.
Checking a solution serves two purposes: helping you if, after having used all the hints, you still are stuck on the problem; or if you have solved the problem and would like to check your work.
• If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you
are stuck or if you want to check your work.
• If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem
and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
Found a typo? Is this solution unclear? Let us know here.
Please rate my easiness! It's quick and helps everyone guide their studies.
The first step should be setting up an equation for the area of the triangle that is in terms of the point ${\textstyle a}$. The slope of the tangent line at ${\displaystyle x}$ is ${\textstyle f'(x)
=-e^{-x}}$, so the equation of the tangent line at ${\displaystyle x=a}$, using the point-slope formula to start, is:
We can read off the ${\textstyle y}$-intercept as ${\textstyle y=e^{-a}(1+a)}$, and a simple rearranging will show that the ${\textstyle x}$-intercept is ${\textstyle x=1+a}$. With this, we now know
the height and base of the right angle triangle whose area ${\displaystyle A}$ we want to find. This is:
Note that if ${\textstyle a<-1}$, then the ${\textstyle x}$-intercept is less than zero, and there would be no area in the first quadrant. Because of this, we restrict ${\textstyle a\geq -1}$. Now,
to find the optimal ${\textstyle a}$ we need to calculate ${\textstyle A'(a)}$ and solve for the critical points.
Setting ${\textstyle A'(a)=0}$ we have:
And since we know that if ${\textstyle a=-1}$, then the area in the positive quadrant is $0$, it follows that the point ${\textstyle a}$ which maximises the area is ${\textstyle a=1}$.
|
{"url":"https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH100_A/December_2023/Question_28(b)","timestamp":"2024-11-11T01:54:44Z","content_type":"text/html","content_length":"76629","record_id":"<urn:uuid:58a03b90-bb5e-42af-a419-aee827ac3789>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00785.warc.gz"}
|
Fourier series
In mathematics, the Fourier series, named after Joseph Fourier (1768—1830), refers to an infinite series representation of a periodic function ƒ of a real variable ξ, of period P:
${\displaystyle f(\xi +P)=f(\xi )\ .}$
In the case of a complex-valued function ƒ(ξ), Fourier's theorem states that an infinite series, known as a Fourier series, is equivalent (in some sense) to such a function:
${\displaystyle f(\xi )=\sum _{n=-\infty }^{\infty }c_{n}e^{2\pi in\xi /P}}$
where the coefficients {c[n]} are defined by
${\displaystyle c_{n}={\frac {1}{P}}\int _{0}^{P}f(\xi )\exp \left({\frac {-2\pi in\xi }{P}}\right)\,d\xi \ .}$
In what sense it may be said that this series converges to ƒ(ξ) is a complicated question.^[1]^[2]
However, physicists being less delicate than mathematicians in these matters, simply write
${\displaystyle f(\xi )=\sum _{n=-\infty }^{\infty }c_{n}e^{2\pi in\xi /P}\ ,}$
and usually do not worry too much about the conditions to be imposed on the arbitrary function ƒ(ξ) of period P in order that this expansion converge to the function.
Gibbs phenomenon
A particular topic in considering how well a Fourier series approximates a function is the behavior known as Gibbs phenomenon, which refers to the behavior of the Fourier series in representing a
piecewise continuous function. A summation of a finite number of terms of the Fourier series oscillates about the the target function, as shown in the figure. Adding more terms to the sum reduces
this oscillation, except for functions with step discontinuities. For such functions, adding more terms reduces the oscillation, except very near the discontinuity, where adding more terms results in
narrowing the width of these oscillations, but not in a reduction of their amplitude. This behavior is the Gibbs phenomenon.^[3]
Real-valued functions in time and space
Fourier's theorem states that any real-valued periodic function can be expressed as a sum of sinusoidal functions with periods related to P:^[4]
${\displaystyle f(\xi )=a_{0}+\sum _{1}^{\infty }a_{n}\cos \left({\frac {2\pi }{P/n}}\xi +\varphi _{n}\right)\ ,}$
a series of cosines with various phases {φ[n]}. Using the cosine relation:
${\displaystyle \cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)\ ,}$
and the orthogonality relations:
${\displaystyle {\frac {2}{P}}\int _{0}^{P}\ d\xi \cos \left({\frac {2\pi }{P/n}}\xi \right)\cos \left({\frac {2\pi }{P/m}}\xi \right)=\delta _{n,m}\ ,}$
${\displaystyle {\frac {2}{P}}\int _{0}^{P}\ d\xi \sin \left({\frac {2\pi }{P/n}}\xi \right)\sin \left({\frac {2\pi }{P/m}}\xi \right)=\delta _{n,m}\ ,}$
${\displaystyle {\frac {2}{P}}\int _{0}^{P}\ d\xi \cos \left({\frac {2\pi }{P/n}}\xi \right)\sin \left({\frac {2\pi }{P/m}}\xi \right)=0\ ,}$
one finds:^[5]
${\displaystyle a_{0}={\frac {1}{P}}\int _{0}^{P}\ d\xi \ f(\xi )}$
${\displaystyle a_{n}\cos \varphi _{n}={\frac {2}{P}}\int _{0}^{P}\ d\xi \ f(\xi )\cos \left({\frac {2\pi }{P/n}}\xi \right)}$
${\displaystyle a_{n}\sin \varphi _{n}=-{\frac {2}{P}}\int _{0}^{P}\ d\xi \ f(\xi )\sin \left({\frac {2\pi }{P/n}}\xi \right)\ ,}$
thereby determining the coefficients {a[n]} and the phases {φ[n]}.
Thus, a function periodic in time with period T can be expressed as a Fourier series:^[6]
${\displaystyle f(t)=a_{0}+\sum _{1}^{\infty }a_{n}\cos \left(n\omega _{0}t+\varphi _{n}\right)\ ,}$
where ω[0] = 2π/T is called the fundamental frequency and its multiples 2ω[0], 3ω[0],... are called harmonic frequencies and the cosine terms are called harmonics of ƒ. A function ƒ(x) of spatial
period λ, can be synthesized as a sum of harmonic functions whose wavelengths are integral sub-multiples of λ (i.e. λ, λ/2, λ/3, etc.):^[4]
${\displaystyle f(x)=a_{0}+\sum _{1}^{\infty }a_{n}\cos \left({\frac {2\pi }{\lambda /n}}x+\varphi _{n}\right)\ .}$
If the function is a fixed waveform propagating in time, we may take ξ as:
${\displaystyle \xi =x-vt\ ,}$
where x is a position in space, v is the speed of propagation and t is the time. The period in space at a fixed instant in time is called the wavelength λ=P, and the period in time at a fixed
position in space is called the period T=λ/v.
1. ↑ G. H. Hardy, Werner Rogosinski (1999). “Chapter IV: Convergence of Fourier series”, Fourier Series, Reprint of Cambridge University Press 1956 ed. Courier Dover Publications, pp. 37 ff. ISBN
2. ↑ For an historical account, see Hans Niels Jahnke (2003). “§6.5 Convergence of Fourier series”, A History of Analysis. American Mathematical Society, pp. 178 ff. ISBN 0821826239.
3. ↑ A good illustration is found in Igor Florinsky (2011). “Figure 5.8: Approximation of a square wave”, Digital Terrain Analysis in Soil Science and Geology. Academic Press, p. 88. ISBN 0123850371
. A treatise on the subject is: Abdul J. Jerri (1998). The Gibbs Phenomenon in Fourier Analysis, Splines and Wavelet Approximations. Springer. ISBN 0792351096.
4. ↑ ^4.0 ^4.1 Eugene Hecht (1975). Schaum's Outline of Theory and Problems of Optics. McGraw-Hill Professional. ISBN 0070277303.
5. ↑ For example, see A. Anand Kumar (2011). Signals and Systems. PHI Learning Pvt. Ltd., p. 166. ISBN 8120343107.
6. ↑ A.V.Bakshi U.A.Bakshi (2008). Circuit Analysis. Technical Publications, p. 10.3. ISBN 8184310579.
|
{"url":"https://en.citizendium.org/wiki/Fourier_series","timestamp":"2024-11-14T14:28:43Z","content_type":"text/html","content_length":"75551","record_id":"<urn:uuid:eff8bf2c-9b36-41c3-aba6-576559f12d8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00200.warc.gz"}
|
Free Printable Graph Paper Templates
Printable Graph Paper
Welcome to Free Printable Graph Paper Templates. These are perfect for teachers, students, engineers, architects to use in classroom or workspace. You will find variety of graph paper and lined paper
to download, including Cartesian, Polar, Log, Isometric, Hexagonal, Honeycomb and more!
Showing 1-20 of 41 records
Color: x
Isometric graph paper is a graph paper with each line forming a 60-degree angle. It creates a 3D effect and can be used for isometric illustrations or designs. Download and print today!
view detailscategory: Isometric
Download hexagon graph paper in 1/2" inch and 1/4" inch hexagons. This kind of graph paper comes with hexagons instead of regular square grids making it perfect to use for math and science
view detailscategory: Graph Paper
College Ruled Lined Paper, also called Medium Ruled Lined Paper is perfect for students to take their notes, write essays, or story writing. Simply download, print, and start writing beautiful
view detailscategory: Lined Paper
Free printable 1/4 inch graph paper with blue grid lines in portrait orientation. This type of graph paper has 1/4 inch squares, which makes it perfect for math equations and science projects.
view detailscategory: Graph Paper
Free printable 1/10 inch graph paper with grid lines in portrait orientation. This type of graph paper has 1/10 inch squares, which makes it perfect for math and science projects.
view detailscategory: Graph Paper
Free printable 1 cm graph paper (10mm spacing) with grey grid lines in portrait orientation. This type of graph paper has 1 cm squares, which makes it perfect for plotting out small-scale
drawings and diagrams.
view detailscategory: Graph Paper
Isometric Dot Paper on letter size paper with dots spacing at 1 cm. This grid is used for drawing three-dimensional objects, such as cubes, pyramids, and spheres.
view detailscategory: Isometric
Help your kindergartner, first and second grader learn how to write beautifully with these handwriting papers. Download and print.
view detailscategory: Lined Paper
Download free printable 1/2" inch graph paper with blue grid lines in portrait orientation. This type of graph paper has 1/2 inch squares, which makes it perfect for bullet journals, doodles and
view detailscategory: Graph Paper
Isometric dot paper is a type of graph paper that uses dots instead of lines to create an isometric grid. This grid is used for drawing three-dimensional objects, such as cubes, pyramids, and
view detailscategory: Isometric
Download free printable 5mm graph paper with blue grid lines. It is perfect for creating graphs, plotting out data, or drawing diagrams. The squares are 5mm wide, so you can create precise
drawings and ensure that everything is aligned correctly.
view detailscategory: Graph Paper
Free printable A4 1 cm graph paper (10mm spacing) with grey grid lines in portrait orientation. This type of graph paper has 1 cm squares, which makes it perfect for plotting out small-scale
drawings and diagrams.
view detailscategory: Graph Paper
Engineering graph paper is a type of graph paper with horizontal and vertical lines at equal distance with major lines appearing at a specified distance of 1/2 inch, 1/4th, 1/5th, 1/10th, or
every 5 or 10 lines.
view detailscategory: Graph Paper
3/4 inch graph paper printable has each square on the grid measuring 0.75 inches (3/4 of an inch) along its side. Download this template as a PDF file that you can print at home.
view detailscategory: Graph Paper
Download free printable 1/8" inch graph paper with blue grid lines in portrait orientation. It comes with 1/8 inch squares so it is perfect for students to use for their math, graphing, and
designing projects.
view detailscategory: Graph Paper
Semi-log graph paper, also called semi-logarithmic or log linear graph paper, is a graphing paper with linear scales along the x-axis (horizontal axis) and logarithmic scales along the y-axis
(vertical axis).
view detailscategory: Graph Paper
The Cornell Notes Template is useful for taking notes on any type of lecture material, and it is especially effective for classes that require a lot of memorization.
view detailscategory: Lined Paper
Free printable Coordinate graph paper can be a handy tool for a variety of tasks, from plotting the trajectory of a projectile to helping students visualize the properties of geometric shapes.
view detailscategory: Graph Paper
Download dot grid paper with grey dots in portrait orientation in Letter Size, A4, and A5 Size. This type of dot paper has dots every 1/2 or 1/4 inch, which makes it perfect for bullet journals,
doodles, and sketching.
view detailscategory: Graph Paper
Download free printable 1 inch graph paper with blue grid lines in portrait orientation. Teachers, parents, and educators can use this for printing grid graph paper for math, science, and other
classroom activities.
view detailscategory: Graph Paper
Printable graph papers are versatile tools that come in various types to accommodate different mathematical, scientific, and design needs.
They provide a structured grid system for plotting points, graphing functions, and creating accurate diagrams. Make sure to check out our collection of printable graph paper PDF templates.
Here are some common types of printable graph papers and the benefits of using them:
Types of Printable Graph Papers:
Standard Graph Paper:
– Features a regular grid of squares, often with lines every inch or centimeter.
– Suitable for general-purpose graphing, plotting coordinates, and creating simple diagrams.
Polar Graph Paper:
– Circular grid with concentric circles, angles, and radial lines.
– Ideal for plotting points in polar coordinates, representing circular patterns, and visualizing functions in a radial context.
Isometric Graph Paper:
– Triangular grid with equilateral triangles.
– Useful for creating three-dimensional drawings, architectural sketches, and engineering designs.
Logarithmic Graph Paper:
– Scaled with logarithmic divisions on one or both axes.
– Beneficial for visualizing data with exponential relationships, such as growth or decay.
Semi-Log Graph Paper:
– Combines a logarithmic scale on one axis with a linear scale on the other.
– Useful for visualizing data with one exponential and one linear relationship.
Hexagonal Graph Paper:
– Grid composed of hexagons.
– Commonly used in fields like biology, chemistry, and game design.
Smith Chart:
– Specialized graph paper used in electrical engineering.
– Helps visualize impedance matching in radio frequency design.
Benefits of Using Printable Graph Papers:
– Graph papers aid in visualizing data, patterns, and relationships in a structured format, making it easier to interpret information.
– The grid on graph papers provides a precise framework for plotting points and creating accurate diagrams, ensuring precision in mathematical and scientific work.
Mathematical Applications:
– Graph papers are essential for plotting functions, solving equations, and working with various mathematical concepts, especially in calculus, geometry, and physics.
Engineering and Design:
– Isometric and specialized graph papers are valuable tools in engineering and design, allowing for the creation of detailed and accurate drawings.
Educational Tool:
– Graph papers are widely used in educational settings to teach mathematical concepts, data representation, and technical drawing.
– The variety of graph paper types caters to a wide range of applications, ensuring there is a suitable format for different needs.
Artistic Applications:
– Graph papers can be used as a foundation for creating geometric artwork, tessellations, and intricate designs.
Whether you’re a student, engineer, scientist, or artist, printable graph papers offer a valuable resource for organizing and expressing ideas in a visually clear and systematic manner.
|
{"url":"https://graphpapertemplates.com/","timestamp":"2024-11-06T18:25:47Z","content_type":"text/html","content_length":"65628","record_id":"<urn:uuid:2db32b8e-f6ee-4d4d-9959-5a37d1dcdbd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00701.warc.gz"}
|
The resistor is one of the basic components in electrical circuits. Resistors are used where the resistance of the circuit needs adjustment, typically for limiting the electrical current between two
nodes. Resistors are also often used to create paths for direct current flow between circuit nodes; typical uses for this includes operational amplifier feedback or transistor biasing. The majority
of resistors in a circuit have fixed resistance values, however potentiometers may be used to allow adjustment of the resistance. One typical application of potentiometers is volume control in audio
Function of resistors
The function of a resistor in a circuit is described by Ohm's Law and dependends upon three variables: the resistance of the resistor, the voltage difference between its poles, and the flow of
electrons (current) through the resistor. If two of the three are known, the third can easily be calculated.
Resistors are not polarized, meaning that they can be inserted into a circuit either way around.
Key properties of resistors
When choosing resistors for a circuit, parameters such as power rating, tolerance and temperature drift should be considered.
The physical size of the resistor component directly affects how much heat generation it can withstand before sustaining damage. The resistor must be selected such that its maximum power rating is
not exceeded, however it is generally recommended to leave at least ten percent margin as well. Resistors are available as through-hole mounted components as well as surface mounted components.
Resistors are specified with a nominal resistance value and a tolerance, often stated in percents. This tolerance states the maximum deviation of a single resistor from its nominal value. The
resistance of two resistors are rarely precisely equal, but their values are both within a certain guaranteed interval, defined by the tolerance. Thus, resistors with low tolerance are often
considered to be precision resistors. Some applications may require closely matched resistors or resistors with tight tolerances to ensure optimal performance, however it is generally considered that
properly designed electronic products will function properly given all possible resistance values within the selected tolerance.
As the ambient temperature of a resistor fluctuates, so does the resistance of the resistor. The amount by which the resistance fluctuates is known as the temperature drift, and is often stated in
ppm/Celsius or ppm/Fahrenheit. Regular resistors experienced increased resistance with rising temperature, and vice versa. There are also NTC (Negative Temperature Coefficient) resistors, whose
resistance decreases with rising temperature, and vice versa.
Combinations of resistors
Resistors may be combined either in series or in parallel, resulting in a total resistance dependent upon the individual resistances as well as the combination order.
Resistors in series
When connecting two or more resistors in series, the equivalent resistance becomes the sum of the individual resistances. Mathematically, this can be expressed as
${\displaystyle R_{\mathrm {eq} }=\sum _{n=1}^{N}R_{n}}$
where ${\displaystyle N}$ is the number of resistors connected in series. If ${\displaystyle N}$ resistors with identical resistance ${\displaystyle R}$ are connected in series, the equivalent
resistance becomes
${\displaystyle R_{\mathrm {eq} }=N\cdot R.}$
Resistors in parallel
When two or more resistors are connected in parallel, the inverse of the equivalent resistance is equal to the sum of the inverses of the individual resistances. Mathematically, this is expressed as
${\displaystyle {\frac {1}{R_{\mathrm {eq} }}}=\sum _{n=1}^{N}{\frac {1}{R_{n}}}}$
where ${\displaystyle N}$ is the number of resistors connected in parallel. In the case of two resistors, the equivalent resistance can be calculated using the formula
${\displaystyle R_{\mathrm {eq} }={\frac {R_{1}\cdot R_{2}}{R_{1}+R_{2}}}.}$
If ${\displaystyle N}$ resistors with identical resistance ${\displaystyle R}$ are connected in parallel, the equivalent resistance is
${\displaystyle R_{\mathrm {eq} }={\frac {R}{N}}.}$
Calculation examples
This section contains some examples of how to calculate the operation of resistors.
Calculating current
If 10 volts of voltage is applied over a 100-ohm resistor, the current through the resistor will, according to Ohm's law, equal
${\displaystyle I={\frac {U}{R}}={\frac {10\ \mathrm {V} }{100\ \mathrm {ohm} }}=0.1\ \mathrm {A} =100\ \mathrm {mA} .}$
Calculating voltage
If a current of 1 mA flows through a 1-kilohm resistor, the voltage across the resistor becomes
${\displaystyle U=I\cdot R=0.001\ \mathrm {A} \cdot 1000\ \mathrm {ohm} =1\ \mathrm {V} .}$
Calculating resistance
If a current of 50 mA flows through a resistor when 6 volts of voltage is applied, the resistance equals
${\displaystyle R={\frac {U}{I}}={\frac {6\ \mathrm {V} }{50\ \mathrm {mA} }}=120\ \mathrm {ohm} .}$
Calculating equivalent resistance
If three resistors (R1 = 100 ohm, R2 = 200 ohm, R3 = 400 ohm) are connected in series, the equivalent resistance is
${\displaystyle R_{\mathrm {eq} }=R_{1}+R_{2}+R_{3}=100\ \mathrm {ohm} +200\ \mathrm {ohm} +400\ \mathrm {ohm} =700\ \mathrm {ohm} .}$
If the same resistors are connected in parallel, the equivalent resistance is
${\displaystyle {\frac {1}{R_{\mathrm {eq} }}}={\frac {1}{R_{1}}}+{\frac {1}{R_{2}}}+{\frac {1}{R_{3}}}={\frac {1}{100\ \mathrm {ohm} }}+{\frac {1}{200\ \mathrm {ohm} }}+{\frac {1}{400\ \mathrm
{ohm} }}=0.0175\ {\frac {1}{\mathrm {ohm} }}.}$
${\displaystyle R_{\mathrm {eq} }={\frac {1}{0.0175}}\ \mathrm {ohm} \approx 57\ \mathrm {ohm} .}$
|
{"url":"https://citizendium.org/wiki/Resistor","timestamp":"2024-11-14T01:01:09Z","content_type":"text/html","content_length":"63212","record_id":"<urn:uuid:cd65afaf-0c47-4821-a186-046017275ffb>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00048.warc.gz"}
|
Cannot get the same results when using newer version of aimms | AIMMS Community
Dear all,
Few years ago, I used Aimms to create the file as attached. At that time, CPLEX 12.17.1 package was used to solve the MILP supply chain model. However, when I use newer packages (12.8,12.9,20.1) to
run the same file, it gives me very different results. Especially, the “CO2_Capture_Compression_Cost” variable is 0 although it should be 5.6 billion euros as I got. I tried many ways to find why it
happens but it doesn’t work. Does anyone can help me in this problem? Thanks a lot for your read and help.
P/s: please run “ReadCO2sourcedata”, “ReadProductssinkdata”, and “ReadStoragesinkdata” to get initial data from “CO2dat” excel file before running the program.
|
{"url":"https://community.aimms.com/aimms-language-12/cannot-get-the-same-results-when-using-newer-version-of-aimms-1167?postid=3200","timestamp":"2024-11-06T07:59:15Z","content_type":"text/html","content_length":"142338","record_id":"<urn:uuid:8582d7fe-065a-4502-ba29-95562d3218ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00732.warc.gz"}
|
hungry shark evolution death mine locations
task 6 was done on 21/05. Click the X in the upper right corner to close the Insert Special Character box.
Maximum material condition . You can find the last match using the XLOOKUP function. Maximum Material Condition or for short, MMC, is a feature of size symbol that describes the condition of a
feature or part where the maximum amount of material (volume/size) exists within its dimensional tolerance. Am I correct in assuming the .006 as it applies to the diameter and the perpendicularity
can vary by .006so long as it does NOT go outside the tolerance range of 11.731 and 11.711? WebMaximum Material Condition (MMC) is a GD&T symbol indicating the maximum or minimum allowed tolerance of
a feature where it has the maximum amount of material (volume/size). Unicode characters are entered by typing the code and then holding the ALT key and pressing X. Ablebits is a fantastic product -
easy to use and so efficient, I don't know how to thank you enough for your Excel add-ins. No 2 point measurement could be above 10.1 or below 9.9. What data is in E and F? The true is that yes, you
would get bonus tolerance as your part diverts from MMC, so one side could be less straight than the other. So something like: Maximum Material Condition or for short, MMC, is a feature of size
symbol that describes the condition of a feature or part where the maximum amount of material (volume/size) exists within its dimensional tolerance. I've tried incorporating SUMPRODUCT within the
MATCH and got slightly better results, but still incorrect. A pin with 10+/-0.1 diameter with a 0.2 perpendicularity MMC should be gauged with a 10.3 hole, correct? That really depends on what the
design is for, I would wager most applications would be fine with it. Khalid 1 88 In the worksheet, select cell A1, and press CTRL+V. The syntax of telling it to only look at response rates where the
# requested > half the average seems correct, but I don't think it likes having that additional formula within the lookup array? 2 4 The term maximum material condition means the largest external
feature or the smallest internal feature. This is a condition where there exists a maximum amount of material within the given dimension tolerance zone for the part or the feature. The diameter of
the hole gauge to use is stated to be equal the feature diameter plus feature tolerance plus GD&T tolerance (to gauge the maximum material of the pin). I added that one to the list. In a Shaft/pin,
MMC = Maximum allowed diameter according to the tolerance. C 30 Alternatively, you can make a list of unique names, write a formula for the first person, and then drag the formula down to get the
highest result for each person. =INDEX(Full_sample[Country],MATCH(1,IF(Full_sample[# Requested]>(AVERAGE(Full_sample[# Requested])/2),IF(Full_sample[Country Response rate]=MIN(Country_Full_sample[F
HQ Country Response rate]),1)),0)). B 10/21/2022 9:39:00 Size tolerance is always followed when the maximum material condition is used. The MMC size of the hole (internal) is 9.8. Its a little
complicated, but read through a time or two and you should be able to replicate what Im talking about. This means there is no bonus tolerance and the envelope of the part is not defined. I have have
a list of 40k totals made up of 300 team members. Bridging the gap between hand tools and CMMs. The MMC of the shaft would be the Maximumdiameter, The MMC of the hole would be its Minimum diameter.
This is a call out on the threads Im grinding for.
Click the symbol you selected to insert it into the Google document. Added it just for you! So using two sets of criteria I am trying, on a different file (not test file), to return the most recent
name entry (all text) associated with a certain ID (text and number ID) but both Maxif and Large functions are returning null values. The callout also removes GD&T Rule#2 which states that all
geometry tolerances are controlled independently of the feature size.
In a hole/bore, MMC = Minimum allowed diameter according to the tolerance. My office can't do maxifs formula excel. This is important for any tolerance stack to ensure that when the tolerances are at
their least desirable condition, the part still functions properly. Im sorry but your task is not entirely clear to me. WebSelect Symbol in the left field and Currency in the right field. WebMaximum
Material Conditional is one of the dimensional limits on a part. It offers: I've been using the Ablebits product for several years, Ultimate Suite turns Excel into what it should have always been,
Ablebits occupies a unique place for Excel users. The next value will hold 8,4% of the volume and that would then be within the top 20%.
When I put the formula it shows value error. 32 520 2256 The drawing has a 10.0+/-0.1 feature callout with GD&T tolerance of 0.2 perdendicularity.
Step 3 Obtain the Bonus Tolerance (BT) from the below table depending on your feature and condition types. Your limits of size specify just that, limits. The problem with using MMC or LMC with
threaded features is that its difficult to determine the amount of bonus tolerance actually permitted. In Excel, create a blank workbook or worksheet. But it give me the newest date of the all sheet
and not newest date for Name A Example: If you have a 1 +/- 0.1 hole with a positional tolerance of 0.1 at MMC, your MMC condition is 0.9. I: 1. On the right side of the Insert tab, click Symbols,
then click the Symbol button. To fully understand the maximum material boundary concept, we need to understand a more fundamental concept: the datum For datum controlled features, the rule typically
is that it would be considered as a MMC part since it could not move anywhere if it was set in a functional gauge. For this, you can use the LARGE function. I would very much appreciate your advice
on how to incorporate this kind of criteria into the MATCH lookup array to make sure it is only indexing the values that correspond with the MAX criteria. WebThe MAX IF function identifies the
maximum value from all the array values that match the logical test. Alternatively, you can use the following non-array formula: As an example, let's work out the best result in rounds 2 and 3. I'm
sure it's a simple think but I can't find any guidance for this. 8 1/9/2023 Jon J 54. Hello! Is there a way to get this info in any formula? Adam, When you are figuring a Max material tolerance, how
do you calculate it. In the worksheet, select cell A1, and press CTRL+V. If the new update is below the current low then an update step should replace the current all time low with the new all time
low. I hope this clarifies things for you. 31 312 2256 2 why ?
Currently this formula indexes a country whose sample size is only 1, when half the average of the full sample is 12. Thank you very much in advance, simon. Position: Distance in X and Y from a
datum. An example of data being processed may be a unique identifier stored in a cookie. Ideally, I would like the indexed result to return the country who had the largest sample above the threshold
whose response rate was 0 (in this case). Holes and Bores have MMC = Minimum allowed diameter according to tolerance. I have seen parts where there is MMC called out for positional tol and some that
do not have MMC callout on positional tol. Just one great product and a great company! Thanks for this site which developed my basic formula skills and going on strongly. And pressing the number code
a limit tolerance 10.020 / 10.000 dimensional limits on a datum takes! Wanted, please describe the problem in more detail or two and you should be, you find... Record of a Minimum value maximum
material condition symbol in excel conditions, use the MINIFS function, the formulas will... Array values that match the criteria of `` a. the permissible variation a. Daily data from year 2011 to
2022 material boundaries ( MMB and )... '' alt= '' '' > < br > < br > click the symbol button a particular part should able... Of 0 Balance limit i have seen parts where there exists a maximum amount
of material the. In connection with perpendicularity Sir WebMAX in Excel - powerful successor of Vlookup there a way to the! My word document toolbar ) or Insert > Annotations > Geometric tolerance
design and intent actually are anyone. Of me giving an Od dimension of 11.731 to 11.711 on a whole new Meaning Manage Settings < >. But read through a time or two and you should be able to replicate
what Im talking about a,! Record of a Minimum value by conditions, use the MINIFS function a datum: Typically a hole not! And examples: XLOOKUP function 40k totals made up of 300 team members max if
and. With a 0.2 perpendicularity MMC should be gauged with a 0.2 perpendicularity should... Zone for the example to work properly, you must paste it into cell A1 of Insert... The program and i ca n't
imagine using Excel without it ( ). But in an if statement ( if that makes any sense: D ) means there is called. Please re-check the article that may be helpful to you: Excel Cumulative Sum - easy
way get! Insert tab, click Geometric tolerance increases the allowed position deviation ( 3 ) due to the tolerance zone the! Conditional max an easy way array function maximum material condition
symbol in excel be above 10.1 or below 9.9 any questions! Maximum will be determined threaded features is that its difficult to determine the inner boundary tolerance! Pull in the tolerance of 0.2
perdendicularity for Miscellaneous Symbols codes for Miscellaneous Symbols MMC on a part will 8,4. Apply the ( L ) to a feature of size tolerance and Geometric tolerance ( Annotation toolbar ) or >.
Used interchangeably conditions on a part versions of Excel provide the long-awaited MAXIFS that... Mmc ) it refers to a feature of size INDEX+MATCH functions can be used thank you for reading and
to! Also has an MMC of the Insert tab, click Geometric tolerance Annotation... We call r0 next value will hold 8,4 % of the GDT symbol with MMC on part! Your design and intent actually are students
score values are Boolean Lastly, you find. ( MMB and LMB ) would you get regardless of the hole be... See you on our blog next week > Annotations > Geometric tolerance response rates of 0 987 =..
Determined that they want a function that makes any sense: D ) helps... Excel 2019, or keyway date associated with the introduction of MAXIFS in Excel, a... Highlight the icon with your now, the MMC
size of the worksheet please have a in! And not a pin or boss category of experience and working knowledge of hole... For popular ALT codes atALT codes for Miscellaneous Symbols, even better tech
supportAbleBits totally delivers we. Limits of size tolerance and Geometric tolerance (.005 ) 5000 only because his limit is 10000 and website available... A width instead Im misunderstanding it
return the most recent date and for! Returning a max date of 1/7/2023 in which the maximum material condition also clarifies the tolerance... Zone of.006 while at MMC A1, and press the F9 key
developed. Is updated through time or Insert > Annotations > Geometric tolerance ( Annotation maximum material condition symbol in excel! Then the INDEX+MATCH functions can be used lot for the part
is called a Go-Gauge ( the. 10+/-0.1 diameter with a 0.2 perpendicularity MMC should be number we call r0 of. ( he have to get 5000 only because his limit is 10000 with using or! Datum it takes on a
datum 72 in these scenarios there are 3 countries the... This means there is MMC called out for positional tol value, then click symbol. Actually are see the internal arrays discussed above, select
cell A1 of the formula it shows error. You very much for the part is called a Go-Gauge ( Meaning the part must always into... Not have MMC = Minimum allowed diameter according to the tolerance zone
of.006 if can. Tell you what tolerance should apply to the VC value determined earlier is not what you wanted please. Dimension tolerance zone of size tolerance is always followed when the maximum
from... With perpendicularity an if statement ( if that makes any sense: )... Can not be outside the given dimension tolerance zone of.006 while at.. The Maximumdiameter, the gauging of a part
Go-Gauge ( Meaning the or... Tolerance range would be fine with it john 7000 5000 ( he have to the! Largest value with conditions child 's play the callout also removes GD & T symbol that defines the
variation. You what tolerance should apply to the tolerance zone for the example to work properly, you can use MINIFS! Material condition is used when designing two mating parts in which the
maximum.. But still incorrect list of 40k totals made up of 300 team members.264 ) minus the positional tolerance.005. Or boss, correct no impact on the size of the formula your... You apply the ( L
) to a feature of size work together to what! Limits on a whole new Meaning available on Windows or Mac if you were to inspect the is. Imagine using Excel without it the Geometric tolerance depends
on what the design is for, i like. Activity, against an entrant name the permissible variation of a pin with MMC ( MMC ) it to! Approach to organizing your personal email events formula in your
example the MMC of.006 while MMC... 5 100-41-4 Ethyl Benzene 0.002 ppm 2/2/2016 its major use is to allow easier assembly conditions on datum. Symbol with MMC work for a particular part should be
able to replicate Im... On what the tolerances for a particular part should be able to replicate what Im about... Used interchangeably is available on Windows or Mac if you have Office,... The 1st,
2nd, and press CTRL+V a single value, then click the X in the following?. Next value will hold 8,4 % of the hole position, it has no impact on size! Would be the least material boundaries ( MMB and
LMB ) is always followed when Geometric. Us know if you apply the ( M ) or Insert > Annotations > tolerance... And hope to see you on our blog next week it can not be outside the given dimension
tolerance for... 'S a simple think but i ca n't find any guidance for this,. That, limits not waste your time on composing repetitive emails from scratch in a,! A function that makes finding the
largest value with conditions child 's play supportAbleBits totally delivers a... Not a pin with MMC work for a hole, shaft, slot, or keyway spreadsheet... No impact on the maximum material condition
TP then the position is nonconforming i thank you much! # 2 which states that all geometry tolerances are controlled independently of the hole would be appreciated. To space the example out again
hope to see you on our blog next!. John 7000 5000 ( he have to get 5000 only because his limit is 10000 seen parts where exists. Two mating parts but over 280,000 data points my word document img src
= '' https: //i.pinimg.com/736x/f2/19/cb/f219cbe6271d9424cf3d77d6db3287ac.jpg '' ''! A hole and not a pin with 10+/-0.1 diameter with a 0.2 perpendicularity MMC should be gauged with a of. B 60
Character name CIRCLED LATIN CAPITAL LETTER M. Unicode Character Encodings MaterialCondition!: 100 feature: Typically a hole and not a pin with MMC work for a slot Vlookup... 88 if P > TP then the
INDEX+MATCH functions can be done by the... That would then be within the top 20 % to know more about what your design and actually. When i put the formula in the right side of the corresponding
cells which! Whole table ( both rows and columns ) as the spreadsheet is updated time. Question correctly i believe the drawing callout to be in error 100 feature Typically... There a way to
calculate running total a Minimum value by conditions, the! You put also a presenation with MMC seems to have an error, or keyway this info in any?... Internal arrays discussed above, select cell A1,
and press CTRL+V your case to it., create a blank workbook or worksheet time or two and you should be gauged a! Is 0.9 and the envelope of the tolerance field and Currency in the most recent date
associated with introduction. In front of me giving an Od dimension of 11.731 to 11.711 an entrant name still. Is not what you mean can use the MINIFS function about finding the 6 highest values in a
keystroke-by-keystroke... Team members 2256 the drawing has a 10.0+/-0.1 feature callout with GD & symbol. An example of data being processed may be helpful to you: Excel Cumulative Sum easy... 1 88
in the tolerance that you get regardless of the part is not clear.
I have bitcoin daily data from year 2011 to 2022. Can you send me a sketch or a drawing at document.getElementById("eeb-354916-365233").innerHTML = eval(decodeURIComponent
("%27%4d%61%74%74%40%67%64%61%6e%64%74%62%61%73%69%63%73%2e%63%6f%6d%27"))*protected email*? The application of the maximum material condition also clarifies the bonus tolerance that occurs when the
geometric tolerance increases. Keep coming back for more questions! Raw data Sheet has all 4 of these areas + lots more but over 280,000 data points. B 20 Im going to assume you are talking about a
hole and not a pin or boss. As the result, it is automatically surrounded with curly brackets like shown in the screenshot below (typing the braces manually won't work!). Hi explain any one of the
GDT symbol with MMC.
Well do our best to answer them. Hello! See this article for instructions and examples: XLOOKUP function in Excel - powerful successor of VLOOKUP. Hello Selva! I am having some trouble returning a
max date.
According to the envelope principle your part cannot go outside of its MMC envelope, and cannot have any 2 point measurement less than the LMC. 1/7/2023 Bob B 60 Character Name CIRCLED LATIN CAPITAL
LETTER M. Unicode Character Encodings. Our topic of interest is No.
I have added some of these to my word document.
Mail Merge is a time-saving approach to organizing your personal email events. please you help me. Thanks for spotting it and letting me know! Names Values "=MAX(IF($E$7:$E$16=E7,$F$7:$F$16))" do you
have any tips? Assuming column A contains names, column B - rounds and column C - results, the formulas would go as follows: The person who made the highest jump: The other side of the tolerance
range would be the Least Material Condition. Hi! My Excel life changed a lot for the better! This formula is competed with a normal Enter keystroke and returns the same result as the array MAX IF
formula: Casting a closer look at the above screenshot, you can notice that invalid jumps marked with "x" in the previous examples now have 0 values in rows 3, 11 and 15, and the next section
explains why. Im confused. 21 104 1506 The actual range of cells in which the maximum will be determined. All rights reserved. i can pull in the most recent date associated with the person via MAXIF
and their name from Vlookup. Step 2 Select cell G4. Something like the conditional formatting, but in an if statement (if that makes any sense :D ). John 7000 5000 (he have to get 5000 only because
his limit is 10000. I thank you for reading and hope to see you on our blog next week! 6 1/7/2023 Jon J 66 Manus comment from a little earlier is correct. I need to find second highest number in an
array using logical not direct formula (large or small) that I know, please help: Name Number Matt, I have a problem figuring something out based on all the Q and A surrounding this subject. You can
have a bilateral tolerance, and that tolerance can be equal, (+/-.01), or it can be unequal, (+.010 / -.005). I want a function that would return the values of 123 = 3, 987 = 7. Is there anything I
am doing wrong? The designer/engineer work together to determine what the tolerances for a particular part should be. Hi! For the value_if_true argument in the second IF statement, we supply the long
jump results (D2:D16), and this way we get the items that have TRUE in the first two arrays in corresponding positions (i.e. It would seem odd to call it a bonus tolerance of .006 if it cannot be
outside the given tolerance range. I am having the same problem. I would like to maintain a record of a minimum value in a cell as the spreadsheet is updated through time. It really falls into the
category of experience and working knowledge of the design details. 10-Oct-20 A When a functional gauge is used for Perpendicularity, any difference the actual feature size is from the maximum
material condition would be a bonus tolerance. Are you trying to determine the inner boundary for tolerance stack purposes? Would you mind sketching it up and sending me the image at
document.getElementById("eeb-787268-32505").innerHTML = eval(decodeURIComponent("%27%6d%61%74%74%40%67%64%61%6e%64%74%62%61%73%69%63%73%2e%63%6f%6d%27"))*protected email*? 5 100-41-4 Ethyl Benzene
0.002 ppm 2/2/2016 Its major use is to allow easier assembly conditions on a part. Here is the article that may be helpful to you: Excel INDEX MATCH with multiple criteria. 3.57 5 joe For example if
your tolerance on the hole is 10 + 0.5 if you had a hole of 10.0 your location would have to be perfect, but with a hole size of 10.5 you can be out of position by 0.5. Privacypolicy Cookiespolicy
Cookiesettings Termsofuse Legal Contactus. For the value_ if_true argument, we supply the long jump results (C2:C10), so if the logical test evaluates to TRUE, the corresponding number from column C
is returned. z o.o. The positional tolerance applies to only the hole position, it has no impact on the size. Hi! However, there are 3 countries above the requested threshold who have response rates
of 0. Though, I suspect your motivations for wanting to do so may be incorrect. Expenses Remaining Balance Limit I have a print in front of me giving an Od dimension of 11.731 to 11.711. (see example
below). See more symbol sets for popular ALT codes atALT Codes for Miscellaneous Symbols. 1 1 We want both that the multiples of the quantity are close to the maximum of a certain value, Like as
There is no mmc or LMC on htis dimension and its datum C on the print. 1 100-41-4 Ethyl Benzene 0.002 ppm 2/2/2016 Why does it work this way? 3 123 2 2022-03-25 Because of its specific logic, the
formula works with the following caveats: To find the max value when any of the specified conditions is met, use the already familiar array MAX IF formula with the Boolean logic, but add the
conditions instead of multiplying them.
101-300 =9.90 Manage Settings
Example: TOTAL OF 4 SESSIONS Could help me with this function I tried it many times and it doesn't work for me,I am using Exel 2013! I am trying to find the simplest formula to calculate the total
enrolled and max class size to fill session columns, in order (1-4), and then be able to AutoFill adjacent columns afterwards, if possible. Tip.
Any help anyone can provide would be very appreciated. Now, the tolerance of .5 in the feature control frame is the tolerance that you get regardless of the size of the hole. ALT Codes for
Miscellaneous Technical Symbols. Circled Latin Capital Letter M HTML Entity Ⓜ (decimal entity), Ⓜ (hex entity) Windows Key Code Alt 9410 or Alt +24C2 1 WebTo open the Symbols menu: 1. To, Sort and
filter links by different criteria, Find, extract, replace, and remove strings by means of regexes, Customizable and adaptive mail merge templates, Personalized merge fields depending on the
recipient or context, "Send immediately" and "send later" scheduling. Hello! With the introduction of MAXIFS in Excel 2019, we can do conditional max an easy way. Thanks. Its just always there.
Suppose we have student details with their score, but some of the students score values are Boolean. Thanks! Please have a look at this article: Excel Cumulative Sum - easy way to calculate running
total. I am not sure I fully understand what you mean. I love the program and I can't imagine using Excel without it! 3 2 This is the formula I'm playing with:
Base on the example above, so what are the MMC values of The other two are Least Material Condition and Regardless of Feature Size. The other side of the tolerance range would be the Least Material
Condition. I am trying to get the lasted date of an activity, against an entrant name ? 3 208 1831 Please re-check the article above since it covers your case. Of the corresponding cells in
max_range, A6 has the maximum value. So if you were to inspect the part, you would need to make 2 measurements. The gauge that controls the Max MaterialCondition of a part is called a Go-Gauge
(Meaning the Part must always Go into it).
A8 and then copy it down along the column = Please provide me with an example of the source data and the expected result. 5 987 6 2021-12-24 The generic MAX IF formula without array is as follows:
Naturally, you can add more range/criteria pairs if needed. In GD&T, maximum material condition (MMC) refers to a feature-of-size that contains thegreatest amount of material, yet remains within its
tolerance zone. Highlight the icon with your error. In criteria_range1, the 1st, 2nd, and 4th cells match the criteria of "a." This is a condition where there exists a maximum amount of material
within the given dimension tolerance zone for the part or the feature. Hi! Use nested IF statements to include additional criteria: Or handle multiple criteria by using the multiplication operation:
Let's say you have the results of boys and girls in a single table and you wish to find the longest jump among girls in round 3. criteria2, (optional).
If this is not what you wanted, please describe the problem in more detail. The result is an array of TRUE and FALSE values where TRUE represents data that match the criterion: {FALSE; FALSE; FALSE;
TRUE; TRUE; TRUE; FALSE; FALSE; FALSE; FALSE; FALSE; FALSE; TRUE; TRUE; TRUE}. The pin needs to be within both perpendicular enough and small enough so that it doesnt get stuck when inserted into its
mating hole at a 90 angle to the face of the part. If these pass your part is in spec. =MIN($A$5,$A$6-SUM($A$7:A7)) It features these options: If P<=TP Then the feature is within the permissible
position envelope. Matt, thank you very much for the answer, now i understand it! To view the purposes they believe they have legitimate interest for, or to object to this data processing use the
vendor list link below. Subtract r0 from your measured local radius (from the 0 deg indicator) and you have your concentricity error Co. Every distance Co must be within the cylindrical tolerance
zone defined in your feature control frame. I wouldnt rely on rules of thumb for determining tolerances, it will most likely come back to haunt you at some point. To see the internal arrays discussed
above, select the corresponding part of the formula in your worksheet and press the F9 key. As for why MMC isnt allowed, I really dont have a better answer other than that is simply the way the
standard is written. Click the Insert tab in the Excel Ribbon. {=IFERROR(IF('Rep Visit Recap'!$K25>"",MAX(IF('Rep Visit Recap'!$K25='Checkin Data'!$P$2:$P$3815,'Checkin Data'!
$A$2:$A$3815,"")),""),"")} for example, true position 0.5 MMC | datum A | B MMC | C ? Will it be concidered as produced at mmc? I am trying to space the example out again. =INDEX(B:B,LOOKUP(2,1/(A:A
<>""),ROW(A:A))). Thanks. However, if you apply the (M) or the (L) to a datum it takes on a whole new meaning. C: 100 Feature:Typically a hole, shaft, slot, or keyway. Assuming the names are in
column A, gender in column B, and jump results in column D, you can use this formula: It is a simplified version of the formula to find top values with criteria. Column A has the dates Lets look at a
bore of 1/4 +/- 0.004 with the below true position callout of 0.002, If the actual bore measures 0.252, the bonus tolerance is 0.252-0.246=0.006. 3 1/1/2023 Bob B 50 Score How about finding the 6
highest values in a whole table (both rows and columns)? Press Ctrl + Shift + Enter so that array function works. (Diameter of the pin) How can I make the result date a blank, instead of a 0 when no
rows are found for the account? Excel MAXIFS function with formula examples, MINIFS function in Excel syntax and formula examples, SMALL IF formula to get Nth lowest value with criteria, Compare 2
columns in Excel for matches and differences, CONCATENATE in Excel: combine text strings, cells and columns, Create calendar in Excel (drop-down and printable), XLOOKUP function in Excel - powerful
successor of VLOOKUP, LARGE IF formula in Excel: get n-th highest value with criteria, How to find top values in Excel with criteria, Excel MIN function - usage and formula examples, Excel Cumulative
Sum - easy way to calculate running total, How to find top values with criteria in Excel. I hope this helps.
Column Q has the year eg. Ultimately I am trying to use a formula to return the most recent date and score for a selected person. Hello! Note:This feature is available on Windows or Mac if you have
Office 2019, or if you have aMicrosoft 365 subscription. For an internal hole its the MMC size of the hole (.264) minus the positional tolerance (.005). Important: For the example to work properly,
you must paste it into cell A1 of the worksheet. In Excel, create a blank workbook or worksheet. Step 2 Select cell G4. Incredible product, even better tech supportAbleBits totally delivers! Hi
Svetlana, The only GD&T Symbols where Max Material Condition can be applied are: If you want to ensure that two parts never interfere, or limit the amount of interference between the parts when they
are at their worst tolerances, MMC can be called out. Stock 156 156. the heaviest part). For drawings, click Geometric Tolerance (Annotation toolbar) or Insert > Annotations > Geometric Tolerance.
WebM stands for maximum material condition" (MMC). Thanks again, WebTo open the Symbols menu: 1. Highlight the icon with your Now, add these 2 dimensions together and divide by 2 to get a number we
call r0. Now, instead of a bonus tolerance however, the part is allowed to shift in the gage as the datum feature moves from MMB to LMB or vice versa. Let us know if you have any other questions.
Gauge (hole gauge) = Max of pin (MMC) + GD&T Symbol Tolerance=??? Person ID - Name - Most recent Location (Multiple possible entries) - Most recent trip (Date) I don't quite understand how you want
to see the result. P Position. Bonus tolerance and VC are calculated the same way just along a width instead. The answer you received earlier is consistent with the information you provided. 2
100-41-4 Ethyl Benzene 0.002 ppm 1/27/2016 Taking a shaft designed to fit into a bore as an example, this specification ensures that the shaft actually fits into the bore under the maximum material
condition (MMC), while also preventing excessively strict size tolerance from being applied in order to avoid cases where the shaft does not fit into the bore. No matter what, you always get the 1
and any additional tolerance is the result of the actual hole size drilled into the part. A B C D E Hi! Alt codes are entered by holding the ALT key and pressing the number code. H: 4 Hi! In this
example, how would you get the second max date of 1/7/2023? Maximum Material Condition (MMC) It refers to a feature of size. By marking geometric tolerance on the vertical axis and size tolerance on
the horizontal axis, the variations in both size tolerance and geometric tolerance can be presented at the same time. The dynamic tolerance diagram is a tool that visually expresses the changes in
the tolerance zone of size tolerance and geometric tolerance. There are two types of symbols below. Get Machining Data-Sheets for hundreds of raw materials. For drawings, click Geometric Tolerance
(Annotation toolbar) or Insert > Annotations > Geometric Tolerance. task 2 was done on 20/05 Great list! Maximum Material Condition (MMC) is a GD&T symbol indicating the maximum or minimum allowed
tolerance of a feature where it has the maximum amount of material (volume/size). Below is the complete list of Windows ALT key numeric pad codes for miscellaneous technical symbols, their
corresponding HTML entity numeric character references and, when available, their corresponding HTML entity named character references.
I think you may have missed the fact that the gage would have to be a sleeve and not a pin as the part in question is for a pin and not a hole. 2 Mary 1500 The Max condition is returning 0, as it
should, but how do I change that 0 into a blank in the result cell. A 10
Total Enrolled in A6 = 27 This again has a composite tol of 0.2 to the other side sheet hole and a parallel callout to B within 1mm. Expressing Maximum Material Condition Using a Dynamic Tolerance
Diagram, International Industrial Standards and GD&T, Form Tolerance and Location Tolerance (Profile Tolerance of Line / Profile Tolerance of Plane), Maximum Material Condition (MMC) and Least
Material Condition (LMC), GD&T Measuring Instruments and Principles, Measuring With Datums: Orientation Tolerance, Measuring With Datums: Location Tolerance.
B 10/27/2022 9:22:00 The only GD&T symbols where you can apply Maximum Material Condition are: Straightness Parallelism Perpendicularity Angularity True Position (the most common use for MMC) The
result is therefore 12.
Lastly, you can have a limit tolerance 10.020 / 10.000. All tolerances are driven by the design. I hope this helps, Im curious to know more about what your design and intent actually are. Do not
waste your time on composing repetitive emails from scratch in a tedious keystroke-by-keystroke way. The criteria2 argument is A8.
On the right side of the Insert tab, click Symbols, then click the Symbol button. To get the minimum value by conditions, use the MINIFS function. Thanks for pointing this one out. The maximum
material condition is used when designing two mating parts. So in your example the MMC of the hole would be 4.87. If the value is same in column B, based on the corresponding highest value in the A
column, it should fetch the name in C. 6 987 7 2022-03-15. These versions of Excel provide the long-awaited MAXIFS function that makes finding the largest value with conditions child's play. =INDEX
($A$2:$A$10, MATCH(MAX($C$2:$C$10), $C$2:$C$10, 0)), On what round: The article shows a few different ways to get the max value in Excel based on one or several conditions that you specify. Step 3
Apply the MAX IF formula i.e. Suppose we have student details with their score, but some of the students score values are Boolean. Male 2. Important: For the example to work properly, you must paste
it into cell A1 of the worksheet. Hello, can you put also a presenation with MMC on a datum ? 33 208 2256 The gauge pin would then be inserted into the hole and as long as the pin Goes into the hole,
the part is in spec. The callout also removes GD&T Rule#2 which states that all geometry tolerances are controlled independently of the feature size.
The only difference is that this formula uses the MAX function to get the maximum value while the above linked example uses LARGE to extract top n values. LMC works in the same way, just in the
opposite direction and can be used to protect wall thickness of say a bore in rod. Bonus tolerance increases the allowed position deviation (3) due to the features size relative to its maximum
material condition. WebPress CTRL+C. If it is a single value, then the INDEX+MATCH functions can be used. This also has an MMC of .006 on that OD in connection with perpendicularity. 2. How can I
print max absolute value irrespective of sign (positive or negative) and print max with sign.
if C, 100/1200 is among top 80% of the values --> "top 80% [False] Hello!
In the above example, I want to take top 3 values from column B and it has to fetch the corresponding name from column C. How would you find the highest result for each person? 1/5/2023 Jon J 72 In
these scenarios there are referred to maximum and least material boundaries (MMB and LMB). If I got you right, the formulas below will help you with your task: Sir WebMAX in Excel Example #3. That
can be done by wrapping the IF function around your MAX IF formula and using the condition for the logical test. Hello, how does a position with MMC work for a slot? Theyve also determined that they
want a tolerance zone of .006 while at MMC. Could you help with the functional pin gage formula in the following condition ?
Teknik Manufaktur Uny, Caja Popular San Pablo Simulador, Dean's At St John's University, Erin Burnett Eye Color, Articles H
|
{"url":"http://tour-consult.com.ua/gj3px/hungry-shark-evolution-death-mine-locations","timestamp":"2024-11-14T07:19:28Z","content_type":"text/html","content_length":"59398","record_id":"<urn:uuid:9e8b0eaf-f43b-4f69-81ef-bcfba9cc2e9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00320.warc.gz"}
|
Game theory – The Dan MacKinlay stable of variably-well-consider’d enterprises
Game theory
October 13, 2016 — April 24, 2017
bounded compute
game theory
incentive mechanisms
I have nothing to say about foundational game theory itself, except to note that JD Williams’ book, The Compleat Strategyst (Williams 1966) is online for free, so you should get it.
How long until we approach Nash equilibrium, also includes a note on Aumann’s correlated equilibrium which I would like to know about.
|
{"url":"https://danmackinlay.name/notebook/game_theory.html","timestamp":"2024-11-08T22:02:41Z","content_type":"application/xhtml+xml","content_length":"43785","record_id":"<urn:uuid:e0211878-dcc8-48cf-b376-ffad94128250>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00051.warc.gz"}
|
9.2 The Second Condition for Equilibrium - College Physics for AP® Courses 2e | OpenStax
Learning Objectives
By the end of this section, you will be able to:
• State the second condition that is necessary to achieve equilibrium.
• Explain torque and the factors on which it depends.
• Describe the role of torque in rotational mechanics.
The second condition necessary to achieve equilibrium involves avoiding accelerated rotation (maintaining a constant angular velocity). A rotating body or system can be in equilibrium if its rate of
rotation is constant and remains unchanged by the forces acting on it. To understand what factors affect rotation, let us think about what happens when you open an ordinary door by rotating it on its
Several familiar factors determine how effective you are in opening the door. See Figure 9.6. First of all, the larger the force, the more effective it is in opening the door—obviously, the harder
you push, the more rapidly the door opens. Also, the point at which you push is crucial. If you apply your force too close to the hinges, the door will open slowly, if at all. Most people have been
embarrassed by making this mistake and bumping up against a door when it did not open as quickly as expected. Finally, the direction in which you push is also important. The most effective direction
is perpendicular to the door—we push in this direction almost instinctively.
The magnitude, direction, and point of application of the force are incorporated into the definition of the physical quantity called torque. Torque is the rotational equivalent of a force. It is a
measure of the effectiveness of a force in changing or accelerating a rotation (changing the angular velocity over a period of time). In equation form, the magnitude of torque is defined to be
$τ = rF sin θ τ = rF sin θ$
where $ττ$ (the Greek letter tau) is the symbol for torque, $rr$ is the distance from the pivot point to the point where the force is applied, $FF$ is the magnitude of the force, and $θθ$ is the
angle between the force and the vector directed from the point of application to the pivot point, as seen in Figure 9.6 and Figure 9.7. An alternative expression for torque is given in terms of the
perpendicular lever arm $r⊥r⊥$ as shown in Figure 9.6 and Figure 9.7, which is defined as
so that
The perpendicular lever arm $r⊥r⊥$ is the shortest distance from the pivot point to the line along which $FF$ acts; it is shown as a dashed line in Figure 9.6 and Figure 9.7. Note that the line
segment that defines the distance $r⊥r⊥$ is perpendicular to $FF$, as its name implies. It is sometimes easier to find or visualize $r⊥r⊥$ than to find both $rr$ and $θθ$. In such cases, it may be
more convenient to use $τ=r⊥Fτ=r⊥F$ rather than $τ=rFsinθτ=rFsinθ$ for torque, but both are equally valid.
The SI unit of torque is newtons times meters, usually written as $N·mN·m$. For example, if you push perpendicular to the door with a force of 40 N at a distance of 0.800 m from the hinges, you exert
a torque of 32 N·m(0.800 m × 40 N × sin 90º) relative to the hinges. If you reduce the force to 20 N, the torque is reduced to $16 N·m16 N·m$, and so on.
The torque is always calculated with reference to some chosen pivot point. For the same applied force, a different choice for the location of the pivot will give you a different value for the torque,
since both $rr$ and $θθ$ depend on the location of the pivot. Any point in any object can be chosen to calculate the torque about that point. The object may not actually pivot about the chosen “pivot
Note that for rotation in a plane, torque has two possible directions. Torque is either clockwise or counterclockwise relative to the chosen pivot point, as illustrated for points B and A,
respectively, in Figure 9.7. If the object can rotate about point A, it will rotate counterclockwise, which means that the torque for the force is shown as counterclockwise relative to A. But if the
object can rotate about point B, it will rotate clockwise, which means the torque for the force shown is clockwise relative to B. Also, the magnitude of the torque is greater when the lever arm is
Now, the second condition necessary to achieve equilibrium is that the net external torque on a system must be zero. An external torque is one that is created by an external force. You can choose the
point around which the torque is calculated. The point can be the physical pivot point of a system or any other point in space—but it must be the same point for all torques. If the second condition
(net external torque on a system is zero) is satisfied for one choice of pivot point, it will also hold true for any other choice of pivot point in or out of the system of interest. (This is true
only in an inertial frame of reference.) The second condition necessary to achieve equilibrium is stated in equation form as
$net τ = 0 net τ = 0$
where net means total. Torques, which are in opposite directions are assigned opposite signs. A common convention is to call counterclockwise (ccw) torques positive and clockwise (cw) torques
When two children balance a seesaw as shown in Figure 9.8, they satisfy the two conditions for equilibrium. Most people have perfect intuition about seesaws, knowing that the lighter child must sit
farther from the pivot and that a heavier child can keep a lighter one off the ground indefinitely.
She Saw Torques On A Seesaw
The two children shown in Figure 9.8 are balanced on a seesaw of negligible mass. (This assumption is made to keep the example simple—more involved examples will follow.) The first child has a mass
of 26.0 kg and sits 1.60 m from the pivot.(a) If the second child has a mass of 32.0 kg, how far is she from the pivot? (b) What is $FpFp$, the supporting force exerted by the pivot?
Both conditions for equilibrium must be satisfied. In part (a), we are asked for a distance; thus, the second condition (regarding torques) must be used, since the first (regarding only forces) has
no distances in it. To apply the second condition for equilibrium, we first identify the system of interest to be the seesaw plus the two children. We take the supporting pivot to be the point about
which the torques are calculated. We then identify all external forces acting on the system.
Solution (a)
The three external forces acting on the system are the weights of the two children and the supporting force of the pivot. Let us examine the torque produced by each. Torque is defined to be
Here $θ=90ºθ=90º$, so that $sinθ=1sinθ=1$ for all three forces. That means $r⊥=rr⊥=r$ for all three. The torques exerted by the three forces are first,
$τ 1 = r 1 w 1 τ 1 = r 1 w 1$
$τ 2 = – r 2 w 2 τ 2 = – r 2 w 2$
and third,
$τp = rpFp = 0⋅Fp = 0. τp = rpFp = 0⋅Fp = 0.$
Note that a minus sign has been inserted into the second equation because this torque is clockwise and is therefore negative by convention. Since $FpFp$ acts directly on the pivot point, the distance
$rprp$ is zero. A force acting on the pivot cannot cause a rotation, just as pushing directly on the hinges of a door will not cause it to rotate. Now, the second condition for equilibrium is that
the sum of the torques on both children is zero. Therefore
Weight is mass times the acceleration due to gravity. Entering $mgmg$ for $w w$, we get
$r2 m2g=r1 m1g. r2 m2g=r1 m1g.$
Solve this for the unknown $r2r2$:
The quantities on the right side of the equation are known; thus, $r2r2$ is
$r2=1.60 m26.0 kg32.0 kg=1.30 m.r2=1.60 m26.0 kg32.0 kg=1.30 m.$
As expected, the heavier child must sit closer to the pivot (1.30 m versus 1.60 m) to balance the seesaw.
Solution (b)
This part asks for a force $FpFp$. The easiest way to find it is to use the first condition for equilibrium, which is
The forces are all vertical, so that we are dealing with a one-dimensional problem along the vertical axis; hence, the condition can be written as
$net F y = 0 net F y = 0$
where we again call the vertical axis the y-axis. Choosing upward to be the positive direction, and using plus and minus signs to indicate the directions of the forces, we see that
This equation yields what might have been guessed at the beginning:
So, the pivot supplies a supporting force equal to the total weight of the system:
Entering known values gives
$Fp = 26.0 kg9.80 m/s2+ 32.0 kg 9.80 m/s2 = 568 N. Fp = 26.0 kg9.80 m/s2+ 32.0 kg 9.80 m/s2 = 568 N.$
The two results make intuitive sense. The heavier child sits closer to the pivot. The pivot supports the weight of the two children. Part (b) can also be solved using the second condition for
equilibrium, since both distances are known, but only if the pivot point is chosen to be somewhere other than the location of the seesaw’s actual pivot!
Several aspects of the preceding example have broad implications. First, the choice of the pivot as the point around which torques are calculated simplified the problem. Since $FpFp$ is exerted on
the pivot point, its lever arm is zero. Hence, the torque exerted by the supporting force $FpFp$ is zero relative to that pivot point. The second condition for equilibrium holds for any choice of
pivot point, and so we choose the pivot point to simplify the solution of the problem.
Second, the acceleration due to gravity canceled in this problem, and we were left with a ratio of masses. This will not always be the case. Always enter the correct forces—do not jump ahead to enter
some ratio of masses.
Third, the weight of each child is distributed over an area of the seesaw, yet we treated the weights as if each force were exerted at a single point. This is not an approximation—the distances
$r1r1$ and $r2r2$ are the distances to points directly below the center of gravity of each child. As we shall see in the next section, the mass and weight of a system can act as if they are located
at a single point.
Finally, note that the concept of torque has an importance beyond static equilibrium. Torque plays the same role in rotational motion that force plays in linear motion. We will examine this in the
next chapter.
Take a piece of modeling clay and put it on a table, then mash a cylinder down into it so that a ruler can balance on the round side of the cylinder while everything remains still. Put a penny 8 cm
away from the pivot. Where would you need to put two pennies to balance? Three pennies?
|
{"url":"https://openstax.org/books/college-physics-ap-courses-2e/pages/9-2-the-second-condition-for-equilibrium","timestamp":"2024-11-13T06:17:48Z","content_type":"text/html","content_length":"623462","record_id":"<urn:uuid:73465870-3eca-43e5-86d7-18db9b214375>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00748.warc.gz"}
|
III Quantum Computation - Some quantum algorithms
3Some quantum algorithms
III Quantum Computation
3.2 Quantum Fourier transform and periodicities
We’ve just seen some nice examples of benefits of quantum algorithms. However,
oracles are rather unnatural problems — it is rare to just have a black-box access
to a function without knowing anything else about the function.
How about more “normal” problems? The issue with trying to compare
quantum and classical algorithms for “normal” problems is that we don’t actually
have any method to find the lower bound for the computation complexity. For
example, while we have not managed to find polynomial prime factorization
algorithms, we cannot prove for sure that there isn’t any classical algorithm that
is polynomial time. However, for the prime factorization problem, we do have a
quantum algorithm that does much better than all known classical algorithms.
This is Shor’s algorithm, which relies on the toolkit of the quantum Fourier
We start by defining the quantum Fourier transform.
(Quantum Fourier transform mod
Suppose we have an
dimensional state space with basis
|0i, |1i, ··· , |N − 1i
labelled by
. The
quantum Fourier transform mod N is defined by
QFT : |ai 7→
The matrix entries are
, ω = e
a, b
= 0
, ··· , N −
1. We write
for the quantum Fourier transform
mod n.
Note that we are civilized and start counting at 0, not 1.
We observe that the matrix
NQFT is
(i) Symmetric
(ii) The first (i.e. 0th) row and column are all 1’s.
Each row and column is a geometric progression 1
, r, r
, ··· , r
, where
r = ω
for the kth row or column.
If we look at
, then we get our good old H. However,
not H ⊗ H.
Proposition. QFT is unitary.
Proof. We use the fact that
1 + r + ··· + r
r 6= 1
N r = 1
So if r = ω
, then we get
1 + r + ··· + r
0 k 6≡ 0 mod N
N k ≡ 0 mod N
Then we have
1 i = j
0 i 6= j
We now use the quantum Fourier Transform to solve the periodicity problem.
Suppose we are given
Z/NZ → Y
(for some set
). We are
promised that f is periodic with some period r | N, so that
f(x + r) = f(x)
for all x. We also assume that f is injective in each period, so that
0 ≤ x
6= x
≤ r −1 implies f(x
) 6= f(x
The problem is to find
, with any constant level of error 1
− ε
independent of
N. Since this is not a decision problem, we can allow ε >
In the classical setting, if
is viewed as an oracle, then
) queries are
necessary and sufficient. We are going to show that quantumly,
log log N
queries with
log N
)) pro ce ssing steps suffice. In later applications, we
will see that the relevant input size is
log N
, not
. So the classical algorithm is
exponential time, while the quantum algorithm is polynomial time.
Why would we care about such a problem? It turns out that later we will
see that we can reduce prime factorization into a periodicity problem. While
we will actually have a very explicit formula for
, there isn’t much we can do
with it, and treating it as a black box and using a slight modification of what
we have here will be much more efficient than any known classical algorithm.
The quantum algorithm is given as follows:
. For example, if
= 2
, then we can make this using
⊗ ··· ⊗
H. If
is not a power of 2, it is not immediately obvious how
we can make this state, but we will discuss this problem later.
(ii) We make one query to get
|fi =
We now recall that
r | N
, Write
, so that
is the number of
periods. We measure the second register, and we will see some
We let
be the least
) =
, i.e. it is in the first period. Note
that we don’t know what x
is. We just know what y is.
By periodicity, we know there are exactly
values of
such that
) =
, x
+ r, x
+ 2r, ··· , x
+ (A − 1)r.
By the Born rule, the first register is collapsed to
|peri =
+ jri
We throw the second register away. Note that
is chosen randomly from
the first period 0, 1, ··· , r − 1 with equal probability.
What do we do next? If we measure
, we obtain a random
-value, so
what we actually get is a random element (
th) of a random period (
namely a uniformly chosen random number in 0
, ··· , N
. This is not too
The solution is the use the quantum Fourier transform, which is not sur-
prising, since Fourier transforms are classically used to extract periodicity
Apply QFT
to |peri now gives
|peri =
We now see the inner sum is a geometric series. If
= 1, then this sum
is just A. Otherwise, we have
1 − ω
1 − ω
1 − 1
1 − ω
= 0.
So we are left with
|peri =
Note that before the Fourier transform, the random shift of
lied in the
+ jri
. After the Fourier transform, it is now encoded in the phase
Now we can measure the label, and we will get some
which is a multiple
, where 0
≤ k
≤ r −
1 is chosen uniformly at random. We rewrite
this equation as
We know
, because we just measured it, and
is a given in the question.
is randomly chosen, and
is what we want. So how do we extract
that out?
If by good chance, we have
coprime to
, then we can cancel
lowest terms and read off
as the resulting denominator
. Note that
to lowest terms can be done quickly by Euclidean algorithm.
But how likely are we to be so lucky? We can just find some number theory
book, and figure out that the number of natural numbers
< r
that are
coprime to
grows as
r/ log log r
). More precisely, it is
∼ e
r/ log log r
where γ is the other Euler’s constant. We note that
log log r
> O
log log N
So if
is chosen uniformly and randomly, the probability that
coprime to r is at least O(1/ log log N).
Note that if
is not coprime with
, then we have
˜r | r
, and in particular
˜r < r
. So we can check if
is a true period — we compute
(0) and
and see if they are the same. If
is wrong, then they cannot be equal as
is injective in the period.
While the probability of getting a right answer decreases as
N → ∞
, we just
have to do the experiment many times. From elementary probability, if an
event has some (small) success probability
, then given any 0
−ε <
log ε
trials, the probability that there is at least one success is
− ε
. So if we repeat the quantum algorithm
log log N
) times, and
each time, then we can get a true
with any constant level of
We can further improve this process — if we have obtained two attempts
˜r, ˜r
, then we know
is at least their least common multiple. So we can
in fact achieve this in constant time, if we do a bit more number theory.
However, the other parts of the algorithm (e.g. cancelling
down to
lowest terms) still use time polynomial in
log N
. So we have a polynomial
time algorithm.
There is one thing we swept under the carpet. We need to find an efficient
way of computing the quantum Fourier transform, or else we just hid all our
complexity in the quantum Fourier transform.
In general, we would expect that a general unitary operations on
) elementary circuits. However, the quantum Fourier transform is
Fact. QFT
can be implemented by a quantum circuit of size O(n
The idea of the construction is to mimic the classical fast Fourier transform.
An important ingredient of it is:
Fact. The state
|xi =
is in fact a product state.
We will not go into the details of implementation.
We can generalize the periodicity problem to arbitrary groups, known as the
hidden subgroup problem. We are given some oracle for
G → Y
, and we are
promised that there is a subgroup
H < G
such that
is constant and distinct on
cosets of
. We want to find
(we can make “find” more precise in two
ways — we can either ask for a set of generators, or provide a way of sampling
uniformly from H).
In our case, we had G = (Z/NZ, +), and our subgroup was
H = {0, r, 2r, ··· , (A − 1)r}.
Unfortunately, we do not know how to do this efficiently for a group in general.
|
{"url":"https://dec41.user.srcf.net/h/III_M/quantum_computation/3_2","timestamp":"2024-11-11T09:54:51Z","content_type":"text/html","content_length":"241982","record_id":"<urn:uuid:59f56379-9fbe-4cb3-a435-4a31af1bc29a>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00525.warc.gz"}
|
A template approach to producing incremental objective cost functions for local search meta-heuristics
Meta-heuristic search techniques based on local search operators have proven to be very effective at solving combinatorial optimisation problems. A characteristic of local search operators is that
they usually only make a small change to the solution state when applied. As a result, it is often unnecessary to re-evaluate the entire objective function once a transition is made but to use an
incremental cost function. For example in the travelling salesman problem, the position of two cities within a tour, may be interchanged. Using an incremental cost function, this equates to an O(1)
operation as opposed to an O(n) operation (where n is the number of cities). In this paper, a new approach based on the use of templates is developed for the generic linked list modelling system [4].
It demonstrates that incremental objective cost functions can be automatically generated for given problems using different local search operators.
Dive into the research topics of 'A template approach to producing incremental objective cost functions for local search meta-heuristics'. Together they form a unique fingerprint.
|
{"url":"https://research.bond.edu.au/en/publications/a-template-approach-to-producing-incremental-objective-cost-funct","timestamp":"2024-11-14T05:09:56Z","content_type":"text/html","content_length":"54346","record_id":"<urn:uuid:b809289a-5c2d-4579-b354-0ef0b9fdf4d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00230.warc.gz"}
|
The Classical Formulation of Inverse Problems
The Classical Formulation of Inverse Problems#
Definition of inverse problems#
Suppose that you have scientific model that predicts a quantity of interest. Let’s assume that this model has parameters that you do not know. These parameters could be simple scalars (mass, spring
constant, dumping coefficients, etc.) or it could be also be functions (initial conditions, boundary values, spatially distributed constitutive relations, etc.) In the case of the latter, we assume
that you have already reduced the dimensionality of the parameterization with, for example, the Karhunen-Lo`eve expansion. Let’s denote all these parameters with the vector \(x\). Assume that:
\[ x\in X \subset\mathbb{R}^d. \]
Now, let’s say we perform an experiment that measures a noisy vector:
\[ y\in Y\subset \mathbb{R}^m. \]
Assume that, you can use your model model to predict \(y\). It does not matter how complicated your model is. It could be a system of ordinary differential or partial differential equations, or
something more complicated. If it predicts \(y\), you can always think of it as a function from the unknown parameter space \(X\) to the space of \(y\)’s, \(Y\). That is, you can think of it as
giving rise to a function:
\[ f : X \rightarrow Y. \]
The inverse problem, otherwise known as the model calibration problem is to find the best \(x\in X\) so that:
\[ f(x) \approx y. \]
Formulation of inverse problems as optimization problems#
Saying that \(f(x)\approx y\) is not an exact mathematical statement. What does it really mean for \(f(x)\) to be close to \(y\)? To quantify this, let us introduce a loss metric:
\[ \ell: Y \times Y \rightarrow \mathbb{R}. \]
such that \(\ell(f(x),y)\) is how much our prediction is off if we chose the input \(x\). Equiped with this loss metric, we can formulate the mathematical problem as:
\[ \min_{x\in X} \ell(f(x),y). \]
The Square Loss#
The choice of the loss metric is somewhat subjective. However, a very common assumption is that to take the square loss:
\[ \ell(f(x), y) = \parallel f(x) - y\parallel_2^2 = \sum_{i=1}^m\left(f_i(x)-y_i\right)^2. \]
For this case, the inverse problem can be formulated as:
\[ \min_{x\in X}\parallel f(x) - y\parallel_2^2. \]
Solution methodologies#
We basically have to solve an optimization problem. For the square loss function, if \(f(x)\) is linear, then you get the classic least squares problem which has a known solution. Otherwise, you get
what is known as generalized least squares. Let’s discuss two possibilities for the most general case:
Case 1: Good for ODEs and simple PDEs#
• Implement your model from scratch in a differential programming framework like JAX.
• Use automatic differentiation to compute the gradient of the loss function.
• Use a gradient-based optimization algorithm like L-BFGS-B to solve the optimization problem.
Case 2: Good for legacy codes#
• Build a computationally inexpensive surrogate model for \(f(x)\).
• Make sure the surrogate modeling is done in a differentiable programming framework.
• Use automatic differentiation to compute the gradient of the loss function.
• Use a gradient-based optimization algorithm like L-BFGS-B to solve the optimization problem.
|
{"url":"https://predictivesciencelab.github.io/advanced-scientific-machine-learning/inverse/basics/01_classic_formulation.html","timestamp":"2024-11-13T02:32:12Z","content_type":"text/html","content_length":"42454","record_id":"<urn:uuid:ba38f257-366f-4b17-92f3-59d7258fb609>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00877.warc.gz"}
|
Convex Hull
Saturday, July 28, 2018
convex hull
The possibilities afforded by art depend upon how and what is interpreted by the viewer, and may have absolutely no connection to what its creator intended. Most of the times, we don’t even know the
creator. While we may glean a bit about the origin of the artwork, for the most part, we have no idea what the artist was thinking before and during the process of its creation. The artist created
the work and moved on, or not. But it doesn’t matter… for, as the work’s viewer, we can keep on going back to it, and as long as we are curious about it, we can keep on discovering new insights in
it, new subtleties that matter only to us. And thereby come into existence the subsequent lives of that work, on each interpretation by each individual viewer.
Two strangers meet. They’ve had different trajectories but have arrived at the same point. And yet, they are connected by history, and by the baggage of language. They talk in a language that a third
person can understand. Their words have weight, and that weight gives them form. The form shifts and changes shape, but is palpable. The language of language is infinitely mutable. Starting with only
a couple of dozen (or a few more) letters, we can construct zillions of words that can be strung together to convey gajillions of thoughts. A convex hull is the tighest enclosure for all the ideas
contained in a work. And, while it has a boundary, much like the circle whose center is everywhere, it is an inadquate container of ideas. And therein lies its beauty, for with each, even the
slightest of rearrangement, a new interpretation is possible. Such is the possibility of art. What the artist does says about her. What I see in her work says about me.
Using Andrey Naumenko’s high performance JavaScript 2D Convex Hull program
function convexHull(points) {
function removeMiddle(a, b, c) {
var cross = (a.x - b.x) * (c.y - b.y) - (a.y - b.y) * (c.x - b.x);
var dot = (a.x - b.x) * (c.x - b.x) + (a.y - b.y) * (c.y - b.y);
return cross < 0 || cross == 0 && dot <= 0;
points.sort(function (a, b) {
return a.x != b.x ? a.x - b.x : a.y - b.y;
var n = points.length;
var hull = [];
for (var i = 0; i < 2 * n; i++) {
var j = i < n ? i : 2 * n - 1 - i;
while (hull.length >= 2 && removeMiddle(hull[hull.length - 2], hull[hull.length - 1], points[j]))
return hull;
All we need is a set of words, points in space and time, and the curiosity afforded by convexHull(points) to explore the world of ideas.
1. The animated convex hull based on an image from the Evacuation of Nada by Ira Hadžić. ↩
|
{"url":"https://punkish.org/Convex-Hull/","timestamp":"2024-11-11T10:49:26Z","content_type":"text/html","content_length":"8174","record_id":"<urn:uuid:5018e81d-e06f-458d-8c48-842afb29fd0e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00009.warc.gz"}
|
Igor Krylov - Algebraic Geometer
March, 2016, New-York, USA
Title: Classification and birational rigidity of del Pezzo fibrations with an action of the Klein simple group
Abstract: Study of embeddings of a finite group G into the Cremona group is equivalent to study of G-birational geometry of rational G-Mori fiber spaces. A good place to start studying finite
subgroups in Cremona group is a study of simple subgroups. I prove that any del Pezzo fibration over projective line with an action of the Klein simple group is either a direct product or a certain
singular del Pezzo fibration X[n] of degree 2. It is known that del Pezzo fibrations of degree 2 satisfying the K^2-condition are birationally superrigid. I extend this result to singular del Pezzo
fibrations and prove that X[n] are superrigid, in particular not rational, for n>2.
Edinburgh Geometry seminar
March, 2016, Edinburgh, UK
Title: Classification and birational rigidity of del Pezzo fibrations with an action of the Klein simple group
Abstract: Cremona group of rank b is the group of birational transformations of the projective n-space. One was to study Cremona group is to study its finite subgroups. This problem can be translated
to the geometric language: instead of subgroups of Cremona group isomorphic to a group G we can study rational G-Mori fiber spaces. This idea works particularly well for simple subgroups of Cremona
group. I prove that any del Pezzo fibration over projective line with an action of the Klein simple group is either a direct product or a certain singular del Pezzo fibration X[n] of degree 2. It is
known that del Pezzo fibrations of degree 2 satisfying the K^2-condition are birationally superrigid. I extend this result to singular del Pezzo fibrations and prove that X[n] are superrigid, in
particular not rational, for n>2.
Algebraic geometry seminar in Higher School of Economics
February 2016, Moscow, Russia
Title: Rationally conneted non Fano type vareties
Abstract: The class of varieties of Fano type is a singular generalization of Fano varieties which is very well behaved under the MMP, the Cox ring of varietie of Fano type is finitely generated. It
is known that all varieties of Fano type are rationally connected. The convers is true in a sense in dimension 2. I will give counterexamples to the converse statement in dimension 3 and higher. I
will use the techniques of birational rigidity and the MMP.
5th Swiss-French workshop on algebraic geometry
January 2016, Charmey, Switzerland
Title: Classification and birational rigidity of del Pezzo fibrations with an action of the Klein simple group
Abstract: Study of embeddings of a finite group G into the Cremona group is equivalent to study of G-birational geometry of rational G-Mori fiber spaces. A good place to start study finite subgroups
is a study of simple subgroup. We prove that any del Pezzo fibration over projective line with an action of the Klein simple group is either a direct product or a certain singular del Pezzo fibration
X[n] of degree 2. It is known that del Pezzo fibrations of degree 2 satisfying the K^2-condition are birationally superrigid. I extend this result to singular del Pezzo fibrations and prove that X[n]
are superrigid, in particular not rational, for n>2.
University of Liverpool Geometry seminar
December, 2015, Liverpool, UK
Title: Rationally connected non Fano type varieties
Abstract: The class of varieties of Fano type is a generalization of Fano varieties which is very well behaved under the MMP. It is known that all varieties of Fano type are rationally connected. The
converse is true in a sense in dimension 2. I will give counterexamples in dimension 3 and higher using the technique of singularities of linear systems which is typically used for proving birational
Edinburgh Geometry seminar
October, 2015, Edinburgh, UK
Title: Rationally connected non Fano type varieties
Abstract: The class of varieties of Fano type is a generalization of Fano varieties which is very well behaved under the MMP. It is known that all varieties of Fano type are rationally connected. The
converse is true in a sense in dimension 2. I will give counterexamples in dimension 3 and higher using the technique of singularities of linear systems which is typically used for proving birational
Conference on Finite subgroups of Cremona group
August, 2015, Trento, Italy
Title: Rationally connected non Fano type varieties
Abstract: The class of varieties of Fano type is a generalization of Fano varieties which is very well behaved under the MMP. It is known that all varieties of Fano type are rationally connected. The
converse is true in a sense in dimension 2. I will give counterexamples in dimension 3 and higher using the technique of singularities of linear systems which is typically used for proving birational
Algebraic geometry seminar in IBS-CGP
June, 2015, Pohang, Korea
Title: Rationally connected non Fano type varieties
Abstract: The class of varieties of Fano type is a generalization of Fano varieties which is very well behaved under the MMP. It is known that all varieties of Fano type are rationally connected. The
converse is true in a sense in dimension 2. I will give counterexamples in dimension 3 and higher using the technique of singularities of linear systems which is typically used for proving birational
EDGE days
June, 2015, Edinburgh, UK
Title: Rationally connected non Fano type varieties
Abstract: The class of varieties of Fano type is a generalization of Fano varieties which is very well behaved under the MMP. It is known that all varieties of Fano type are rationally connected. The
converse is true in a sense in dimension 2. I will give counterexamples in dimension 3 and higher using the technique of singularities of linear systems which is typically used for proving birational
|
{"url":"http://krylov.su/latest-activity/page/5/","timestamp":"2024-11-10T21:47:59Z","content_type":"text/html","content_length":"18976","record_id":"<urn:uuid:ff20de44-da88-4ac6-ac75-88fbc0f99eb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00479.warc.gz"}
|