# Techniques for Proving a Ring is Artinian?

So I have been posed the problem of showing that $\begin{bmatrix} \mathbb{R} & \mathbb{R} \\ 0 & \mathbb{Q} \end{bmatrix}$ is left Artinian. Now, when showing rings are Noetherian, I usually show that every ideal is finitely generated. When showing rings are not Noetherian/Artinian, I construct ascending/descending chains of ideals that do not stabilize. My issue is really that I have no idea how to go about proving that rings are Artinian (modulo obvious cases like fields). So, although a hint about this particular matrix ring would be nice, I am really asking the more general question: what are the canonical techniques for proving that a ring is left/right Artinian?

• Try to classify the ideals. In this case the image of any subspace under the map $\mathbb R^2\rightarrow\begin{bmatrix}\mathbb R&\mathbb R\\0&\mathbb Q\end{bmatrix}$ is an ideal, and there are only 2 other ideals. – stewbasic Sep 29 '16 at 23:36

## For this specific example

There's a method to find all one-sided ideals of a ring like this, which I've described in this post and its links. It's a very useful thing to have under your belt. Really it would probably be best to accomplish this problem using this method, or an ad-hoc hybrid with your own observations.

## In general

Of course, there is no magic bullet for all problems. Aside from verifying the DCC on left ideals directly, there are a few other characterizations like these:

1. $R$ has a generator $_RG$ which is Artinian.
2. Every finitely generated left $R$-module is Artinian.
3. Every finitely cogenerated left $R$-module is Artinian.

But they do not seem to be easier than verifying the chain condition directly. There are more sophisticated combinations of theorems too, if you think you are justified in assuming them. For example, Hopkins–Levitzki says that if you can prove the ring is left Noetherian, its radical is a nilpotent ideal, and the quotient by the radical is semisimple, then the ring is also left Artinian.
For commutative rings, there is also another handy characterization: Artinian iff Noetherian and all prime ideals are maximal.
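As a quick worked instance of that commutative criterion (my own illustration, not part of the original answer): for a field $k$, the quotient $k[x]/(x^n)$ is Artinian.

```latex
% k[x]/(x^n) is Noetherian, being a quotient of the Noetherian ring k[x].
% Its only prime ideal is (x), which is maximal, so the criterion applies.
% Concretely, its full ideal lattice is the finite chain
0 = (x^n) \subsetneq (x^{n-1}) \subsetneq \dots \subsetneq (x) \subsetneq k[x]/(x^n),
% so every descending chain of ideals stabilizes for trivial reasons.
```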
# Sequences Plugin

## What the plugin does

The Sequences plugin provides transactional sequences for GraphDB. A sequence is a long counter that can be atomically incremented in a transaction to provide incremental IDs.

## Usage

The plugin supports multiple concurrent sequences, where each sequence is identified by an IRI chosen by the user.

### Creating a sequence

Choose an IRI for your sequence, for example http://example.com/my/seq1. Insert the following triple to create a sequence whose next value will be 1:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

INSERT DATA {
    my:seq1 seq:create []
}
```

You can also create a sequence by providing the starting value, for example to create a sequence whose next value will be 10:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

INSERT DATA {
    my:seq1 seq:create 10
}
```

When using the GraphDB cluster, you might get the following exception if the repository existed before registering the plugin: `Update would affect a disabled plugin: sequences`. You can activate the plugin with:

```sparql
INSERT DATA { [] <http://www.ontotext.com/owlim/system#startplugin> "sequences" . }
```

### Using a sequence

#### Processing sequence values on the client

In this scenario, new and current sequence values can be retrieved on the client, where they can be used to generate new data that can be added to GraphDB in the same transaction. For a workaround in the cluster, see here.

Note: the examples below will not work inside the GraphDB Workbench, as they need to be executed in one single transaction; if run one by one, they would be performed in separate transactions. See here how to execute them in one transaction.
To use any sequence, you must first start a transaction and then prepare the sequences for use by executing the following update:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>

INSERT DATA {
    [] seq:prepare []
}
```

Then you can request new values from any sequence by running a query like this (for the sequence http://example.com/my/seq1):

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

SELECT ?next {
    my:seq1 seq:nextValue ?next
}
```

To query the last obtained value without incrementing the counter, you can use a query like this:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

SELECT ?current {
    my:seq1 seq:currentValue ?current
}
```

Use the obtained values to construct IRIs, assign IDs, or for any other use case.

#### Using sequence values only on the server

In this scenario, new and current sequence values are available only within the execution context of a SPARQL INSERT update. New data using the sequence values can be generated by the same INSERT and added to GraphDB.

The following example prepares the sequences for use and inserts some new data using the sequence http://example.com/my/seq1, where the subject of the newly inserted data is created from a value obtained from the sequence. The example will work both in:

• the GraphDB cluster – as new sequence values do not need to be exposed to the client.
• the GraphDB Workbench – as it performs everything in a single transaction by separating the individual operations with a semicolon.
```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Prepares sequences for use
INSERT DATA {
    [] seq:prepare []
};

# Obtains a new value from the sequence and creates an IRI based on it,
# then inserts new triples using that IRI
INSERT {
    ?subject rdfs:label "This is my new document" ;
        a my:Type1
} WHERE {
    my:seq1 seq:nextValue ?next
    BIND(IRI(CONCAT("http://example.com/my-data/test/", STR(?next))) as ?subject)
};

# Retrieves the last obtained value, recreates the same IRI,
# and adds more data using the same IRI
INSERT {
    ?subject rdfs:comment ?comment
} WHERE {
    my:seq1 seq:currentValue ?current
    BIND(IRI(CONCAT("http://example.com/my-data/test/", STR(?current))) as ?subject)
    BIND(CONCAT("The document ID is ", STR(?current)) as ?comment)
}
```

After that, commit the transaction.

### Dropping a sequence

Dropping a sequence is similar to creating it. For example, to drop the sequence http://example.com/my/seq1, execute this:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

INSERT DATA {
    my:seq1 seq:drop []
}
```

### Resetting a sequence

In some cases, you might want to reset an existing sequence such that its next value will be a different number. Resetting is equivalent to dropping and recreating the sequence. To reset a sequence such that its next value will be 1, execute this update:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

INSERT DATA {
    my:seq1 seq:reset []
}
```

You can also reset a sequence by providing the starting value.
For example, to reset a sequence such that its next value will be 10, execute:

```sparql
PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
PREFIX my: <http://example.com/my/>

INSERT DATA {
    my:seq1 seq:reset 10
}
```

### Workaround for using sequence values on the client with the cluster

If you need to process your sequence values on the client in a GraphDB 9.x cluster environment, you can create a single-node (i.e., not part of a cluster) worker repository to provide the sequences. It is most convenient to have that repository on the same GraphDB instance as your primary master repository. Let's call the master repository where you will store your data master1, and the second worker repository where you will create and use your sequences seqrepo1.

#### Managing sequences

Execute all create, drop, and reset statements in seqrepo1. The examples below assume that you have created a sequence http://example.com/my/seq1.

#### Using sequences on the client

1. First, obtain one or more new sequence values from the repository seqrepo1:
   1. Start a transaction in seqrepo1.
   2. Prepare the sequences for use by executing this in the same transaction:

      PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
      INSERT DATA { [] seq:prepare [] }

   3. Obtain one or more new sequence values from the sequence http://example.com/my/seq1:

      PREFIX seq: <http://www.ontotext.com/plugins/sequences#>
      PREFIX my: <http://example.com/my/>
      SELECT ?next { my:seq1 seq:nextValue ?next }

   4. Commit the transaction in seqrepo1.
2. Then process the obtained values on the client, generate new data, and insert it into the master repository master1:
   1. Start a transaction in master1.
   2. Insert data using the obtained sequence values.
   3. Commit the transaction in master1.

#### Handling backups

To always ensure data consistency with backups, follow this order:

• Backup:
  1. Back up the master repository master1 first.
  2. Back up the sequence repository seqrepo1 second.
• Restore:
  1. Restore the sequence repository seqrepo1 first.
  2. Restore the master repository master1 second.

An alternative would be to not back up the seqrepo1 repository at all, but simply recreate the repository and the sequence (or reset the sequence) with the next potential sequence value taken from the master1 repository. Here is a sample query that retrieves the next potential value (which is equal to the last used value + 1):

```sparql
PREFIX ent: <http://www.ontotext.com/owlim/entity#>
PREFIX my: <http://example.com/my/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?nextValue WHERE {
    ?type a my:Type1 ;
        ent:id ?id .
    BIND(xsd:int(REPLACE(STR(?type), "http://example.com/my-data/test/", "")) + 1 as ?nextValue)
}
ORDER BY DESC(?id)
LIMIT 1
```

Note that this example assumes that sequence values were used to generate IRIs, and that IRIs with higher values were used for the first time after IRIs with lower values.
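On the client side, the SELECT queries in these examples return results in the standard SPARQL 1.1 JSON results format. A small Python sketch of the client-side processing step — the helper names are my own, not part of any GraphDB client library:

```python
import json

def extract_bindings(results_json, var):
    """Pull the values bound to `var` out of a SPARQL 1.1 JSON result set."""
    return [row[var]["value"]
            for row in results_json["results"]["bindings"]
            if var in row]

def document_iri(seq_value):
    """Mirror the BIND(IRI(CONCAT(...))) pattern from the server-side example."""
    return "http://example.com/my-data/test/" + str(seq_value)

# A response shaped like the one returned for
# `SELECT ?next { my:seq1 seq:nextValue ?next }`
response = json.loads("""
{
  "head": {"vars": ["next"]},
  "results": {"bindings": [
    {"next": {"type": "literal",
              "datatype": "http://www.w3.org/2001/XMLSchema#long",
              "value": "10"}}
  ]}
}
""")

values = extract_bindings(response, "next")
print(document_iri(values[0]))  # http://example.com/my-data/test/10
```

The generated IRIs can then be used in the INSERT executed against master1 within the same client transaction.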
# Lie's third theorem via differential graded algebras?

Dennis Sullivan, "Infinitesimal computations in topology", Publ. IHES: at the end of section 8, he writes, among other things, roughly the following. Let $\mathfrak{g}$ be a (finite-dimensional, real) Lie algebra and let $\Lambda \mathfrak{g}^{\ast}$ be the Chevalley-Eilenberg complex (i.e. the exterior algebra, with the differential dual to the Lie bracket). He considers the spatial realization $\langle \Lambda \mathfrak{g}^{\ast}\rangle$ with respect to the simplicial d.g.a. of $C^{\infty}$-forms on the standard simplices. This is a simplicial set; the $p$-simplices are the d.g.a. homomorphisms $\Lambda \mathfrak{g}^{\ast} \to \mathcal{A}(\Delta^p)$ to the de Rham forms on the simplex. Let $G$ be the simply connected Lie group with Lie algebra $\mathfrak{g}$. "Theorem" (8.1)' (the quotation marks are due to Sullivan) says that the fundamental group of $\langle \Lambda \mathfrak{g}^{\ast} \rangle$ is isomorphic to $G$.

Now the set of $p$-simplices of $\langle \Lambda \mathfrak{g}^{\ast} \rangle$ has a topology, induced from the $C^{\infty}$-topology on the space of $\mathfrak{g}$-valued forms on the simplices, and so $\pi_1 (\langle \Lambda \mathfrak{g}^{\ast} \rangle )$ is a topological group. Sullivan also says that the above isomorphism is a homeomorphism. It is not difficult to verify these assertions. Since it is rather clear how to describe the exponential map in this construction, we can also recover the differentiable structure on $G$ from this method.

The upshot of this discussion is: once the existence of the simply connected Lie group is known (this is Lie's third theorem, proven only decades after Lie by Cartan), it has the given abstract description. It is well known that Lie's third theorem is a pretty hard result (the standard proof goes via Ado's theorem). It seems possible to reverse the logic of that argument and give a proof of Lie's third theorem.
Given $\mathfrak{g}$, \emph{define} $G:=\pi_1 (\langle \Lambda \mathfrak{g}^{\ast} \rangle)$ as a topological group. The exponential map $\mathfrak{g} \to G$ is given by the following formula: any $x \in \mathfrak{g}$ defines a constant $\mathfrak{g}$-valued $1$-form on $\Delta^1$, hence a $1$-simplex of $\langle \Lambda \mathfrak{g}^{\ast} \rangle$. That was the easy part; here are the nontrivial parts:

1. Show that $G$ is Hausdorff (probably difficult).
2. Put a smooth structure on it, such that the exponential map is a local chart (probably the hardest part).
3. Once this is done, the simple connectivity of $G$ and the fact that $\mathrm{Lie}(G)=\mathfrak{g}$ are probably both obvious.

After all these preliminaries, I can ask my question: has this approach been written down properly? I am aware that a generalization of this argument has been used by Getzler http://arxiv.org/abs/math/0404003 and Henriques http://arxiv.org/abs/math/0603563, but in these papers I do not find the details. It is of course also possible (maybe desirable) to banish all the fancy language from the discussion, leaving a definition of $G$ as a quotient of the space of $\mathfrak{g}$-valued 1-forms on the interval.

## 1 Answer

The details are here: Marius Crainic, Rui Fernandes, Integrability of Lie brackets, http://arxiv.org/abs/math/0105033

(To connect this to your question, notice that a morphism $T X \to \mathfrak{g}$ of Lie algebroids, which is the language they use, is dually the same as a morphism $\Omega^\bullet(X) \leftarrow CE(\mathfrak{g})$ of dg-algebras. For more see here.)

Dear Urs, thank you for the answer. Crainic and Fernandes refer to the textbook by Duistermaat and Kolk, which contains a proof of Lie III precisely along the lines I sketched! Isn't it curious that Duistermaat and Kolk's proof uses the statement that $H^2 (G;\mathbb{R})=0$ for any simply connected Lie group (the universal cover of the adjoint)? This cohomological fact is apparently important to prove Hausdorffness.
The proof of Ado's theorem that I know is based on the closely related second Whitehead lemma ($H^2 (\mathfrak{g})=0$ for $\mathfrak{g}$ semisimple). – Johannes Ebert Oct 23 '10 at 15:47
Mathematical and Physical Journal for High Schools, issued by the MATFUND Foundation

# Problem B. 4711. (April 2015)

B. 4711. Let $\displaystyle f(x)=\frac{4^x}{4^x+2}$. Calculate the value of the sum $\displaystyle f(0/2015)+ f(1/2015)+f(2/2015)+\dots +f(2014/2015)+f(2015/2015)$.

(5 points)

Deadline expired on May 11, 2015.

### Statistics:

70 students sent a solution.

- 5 points: 60 students.
- 4 points: 5 students.
- 3 points: 3 students.
- Unfair, not evaluated: 2 solutions.

Problems in Mathematics of KöMaL, April 2015
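The key observation is that $f(x)+f(1-x)=1$, since $f(1-x)=\frac{4^{1-x}}{4^{1-x}+2}=\frac{2}{4^x+2}$; the 2016 terms pair off into 1008 pairs, so the sum is 1008. A quick numerical check of that value (my own sketch, not an official solution):

```python
def f(x):
    return 4 ** x / (4 ** x + 2)

# Pair identity: f(x) + f(1 - x) should always equal 1
assert abs(f(0.3) + f(0.7) - 1) < 1e-12

# The sum f(0/2015) + f(1/2015) + ... + f(2015/2015): 2016 terms, 1008 pairs
total = sum(f(k / 2015) for k in range(2016))
print(total)  # ~1008.0
```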
Question: DESeq2 question about design formula with nested variables

Bash0 wrote:

Hi, I am currently working on an experiment and have some trouble with the design. I've looked on the Bioconductor help pages and the manual, but I have the feeling that I am overlooking something simple. Maybe you could help with some insights, or point me to a post with a similar setup. My setup is as follows:

genotype  timepoint  flexible
GT1       T1         flex
GT3       T1         flex
GT2       T1         non_flex
GT4       T1         non_flex
GT5       T1         non_flex
GT1       T2         flex
GT3       T2         flex
GT2       T2         non_flex
GT4       T2         non_flex
GT5       T2         non_flex
GT1       T3         flex
GT3       T3         flex
GT2       T3         non_flex
GT4       T3         non_flex
GT5       T3         non_flex

I have 5 different genotypes (GT1..GT5) which are genotypically different. Each genotype was put in the cooler and after some time RNA was extracted (T2). Later, the samples were taken out of the cooler and after some time RNA was extracted again (T3). Every genotype is either flexible or non-flexible (flexible meaning, in this case, that they deal better with cold). For each sample/timepoint I have three biological replicates (omitted from the scheme above).

With this experiment I want to know which genes change over the different time points and what the difference is between genotypes while taking into account the flexible trait. What I could do is make a design like this (following the time series example in the manual):

~ genotype + timepoint + genotype:timepoint

in which the genotype is modelled with the interaction over time. However, in this case I don't take into account the flexible trait. With a result set like that, I could manually calculate the overlap between different interaction terms (i.e. (GT1T2/GT1T1) vs (GT2T2/GT2T1)), but this doesn't seem the best way to go.
What I could also do is replace genotype with flexible in the formula, which would make the design simpler, but then it would not take into account the differences between genotypes, which is something I don't want. Do you have any tips to solve this?

Best regards, Bas

deseq2 • 54 views

Michael Love wrote:

If you think the genotype differences are important at baseline, but the interaction effect only differs between flex and non-flex, then you should be able to adapt the technique outlined in the vignette about controlling for individual baselines while testing for group differences.
# Notes

## Skiena Chapter 3

### Basics of priority queues

Priority queues are queues in which the elements are kept in a sorted order. Scheduling jobs by relative importance is an example where items should be maintained in the queue in order of importance. Priority queues provide more flexibility than simple sorting, because they allow new elements to enter the system at arbitrary intervals: it is much more cost-effective to insert a new job into a priority queue than to re-sort everything on each arrival.

The process of dating is another example application: as information is processed, samples are removed from the queue, re-evaluated, and reinserted.

### Priority queue interface

Priority queue operations:

• insert(q,x) - given an item x with key k, insert it into priority queue q
• find-min(q) or find-max(q) - return a pointer to the item with the smallest/largest key value among all elements in priority queue q
• delete-min(q) or delete-max(q) - remove the item from priority queue q whose key is minimum/maximum

### Implementations

Worst-case complexity of the operations:

• Unsorted array: insert O(1), find-min O(n), delete-min O(n)
• Sorted array: insert O(n), find-min O(1), delete-min O(1)
• Balanced binary search tree: insert O(log n), find-min O(log n), delete-min O(log n)

## Goodrich Chapter 9

### Priorities and priority queues

Example: an air-traffic control center deciding which flights to clear for landing among the many approaching the airport. (The airport also supplied example applications of stacks and queues.)

A priority queue is a collection of prioritized elements that allows arbitrary element insertion, and the removal of the element with the highest priority.
It is most common for priorities to be expressed numerically, but any object can be used as a key so long as it supports comparison (a < b). A mapping takes keys to integers or real numbers.

### Priority queue ADT

The abstract data type supports removal of one item, the item with the highest priority, so this is essentially a queue with a modified remove method:

• P.add(k, v) - add an item with key k and value v into priority queue P
• P.min() - peek at the minimum item - return an item (k,v) representing the key and value of the item in priority queue P with minimum key, but do not remove it
• P.remove_min() - remove the minimum item - remove and return an item (k,v) representing the key and value of the item with minimum key in priority queue P

### Implementation notes

There are several ways to concretely implement a priority queue; the two main ones are:

• sorted
• unsorted

OOP principles also apply to our implementation:

• composition design pattern - one object consisting of multiple parts; in this case, a key and a value
• other examples of composition: an (x,y) point and its nearest neighboring point, a String item and a count of its occurrences

### Unsorted implementation

The unsorted implementation has a fast add operation and slow min/remove_min operations. Since we can add anywhere, adding is O(1); finding the minimum is an O(N) operation.

The UnsortedPriorityQueue class extends the PriorityQueue base class. It keeps track of items using a PositionalList, which is implemented on top of a doubly linked list.

### Sorted implementation

The sorted implementation has a slow add method and fast min/remove_min methods. The add method needs to find the appropriate place to insert the item in the sorted list, but the implementation always knows where the minimum item is. The cost of inserting an item is worst-case O(N), since we may have to traverse the list to reach the insertion point, and in the worst case that requires traversing N nodes.
(With a sorted array instead of a linked list, random access would let us find the insertion position in O(log N) comparisons via binary search, but shifting elements to make room still costs O(N); the minimum itself always sits at a known end of the array.)

### Heap

The heap is a data structure offering some of the advantages of both a sorted and an unsorted sequence. (Trees have the advantage of organizing information hierarchically.) In this case, the heap uses the tree level to organize information by value.

Heap-Order Property: in a heap T, for every position p other than the root, the key stored at p is greater than or equal to the key stored at p's parent. As a consequence, a minimum key is always stored at the root of T, so the min and remove_min methods have an easy time.

The heap must also satisfy an additional constraint: the heap must be complete.

Complete Binary Tree Property: a heap T with height h is a complete binary tree if levels 0, 1, 2, ..., h-1 of T have the maximum number of nodes possible (namely, level i has 2^i nodes, for 0 <= i <= h-1) and the remaining nodes at level h reside in the leftmost positions possible.

Basically, we fill in the levels one at a time, never skipping ahead to the next level. This guarantees that the left and right subtrees of the root cannot differ in height by more than 1.

Heap height: a heap T with n entries has height h = floor(log n) (log base 2). This relies on the fact that if T is complete, there can be only one incomplete level at a time, so levels 0 through h-1 of T must be full.
The number of nodes in the full levels 0 through h-1 is

${\displaystyle 1+2+4+\dots+2^{h-1}=2^{h}-1,}$

and the number of nodes in level h is between 1 and ${\displaystyle 2^{h}}$, so

${\displaystyle n\geq 2^{h}-1+1=2^{h}}$ and ${\displaystyle n\leq 2^{h}-1+2^{h}=2^{h+1}-1.}$

Taking logs of both inequalities gives

${\displaystyle h\leq \log (n)}$ and ${\displaystyle \log (n+1)-1\leq h.}$

These two, plus the fact that h is an integer, imply h = floor(log(n)).

Adding to a heap requires two actions:

• add a new node to the tree
• enforce the heap-order property (arrangement by value)

The add action creates a new heap node to store the key-value pair (k,v). The complete binary tree property requires us to place the new node at the next available position on level h or, if level h is filled, in the leftmost position of the next level.

The post-insert action enforces the heap-order property through a process called up-heap bubbling. If we have not inserted into an empty tree, we compare the key of our new position p to that of p's parent, which we denote q. If key(p) >= key(q), the heap-order property is satisfied and the algorithm terminates. If key(p) < key(q), we restore the heap-order property by swapping p and q. The property is now satisfied at position p but may be violated at position q, so we keep applying the routine as we move up the tree until it holds everywhere.

The worst case is when the newly added node becomes the new minimum of the heap: up-heap bubbling propagates it all the way from its initial position, as a leaf in the last level, up to the root. The worst-case cost is O(log N).

### Removing: heap remove_min and post-removal

The main method of the priority queue is a modification of the remove method of the queue ADT: the remove_min method, which removes the item at the root of the tree T.
Once the root item is removed, however, the tree needs to be repaired, so removal is not a simple O(1) operation. We need to ensure that the tree stays a complete binary tree. To do that, we copy the very last item in the tree into the root node and then, working from the root toward the leaves this time, ensure that the heap-order property is satisfied at each position. The post-removal algorithm that restores heap ordering is called down-heap bubbling.

There are two cases for checking position p as we move down the tree:

• position p has no right child: let c be the left child of p
• otherwise, p has both children: let c be the child of p with the minimal key

If key(p) <= key(c), the heap-order property is satisfied and the algorithm terminates. If key(p) > key(c), we restore the heap-order property by swapping p and c. When p has two children, we must consider the smaller of the two, which is why c is chosen that way. Now that heap order is enforced at position p, we move on to check position c, and we continue swapping down T until no violation of the property occurs.
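The add/up-heap and remove_min/down-heap procedures above can be condensed into an array-based min-heap. A minimal Python sketch (my own, not the book's code):

```python
class MinHeap:
    """Array-based binary min-heap following the add / remove_min scheme above."""

    def __init__(self):
        self._data = []  # complete binary tree, stored level by level

    def _parent(self, i):
        return (i - 1) // 2

    def add(self, key):
        # Keep the tree complete: new node goes in the leftmost free slot.
        self._data.append(key)
        # Up-heap bubbling: swap with the parent while smaller than it.
        i = len(self._data) - 1
        while i > 0 and self._data[i] < self._data[self._parent(i)]:
            p = self._parent(i)
            self._data[i], self._data[p] = self._data[p], self._data[i]
            i = p

    def remove_min(self):
        if not self._data:
            raise IndexError("remove_min from empty heap")
        minimum = self._data[0]
        last = self._data.pop()  # removing the last node keeps the tree complete
        if self._data:
            self._data[0] = last
            # Down-heap bubbling: swap with the smaller child while larger than it.
            i, n = 0, len(self._data)
            while True:
                left, right = 2 * i + 1, 2 * i + 2
                if left >= n:
                    break  # no children: heap order restored
                c = left
                if right < n and self._data[right] < self._data[left]:
                    c = right  # pick the child with the smaller key
                if self._data[i] <= self._data[c]:
                    break
                self._data[i], self._data[c] = self._data[c], self._data[i]
                i = c
        return minimum


h = MinHeap()
for x in [5, 3, 8, 1, 9, 1]:
    h.add(x)
print([h.remove_min() for _ in range(6)])  # [1, 1, 3, 5, 8, 9]
```

Both add and remove_min traverse at most one root-to-leaf path, matching the O(log N) worst case argued above.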
# Neutrinos

## T2K scientific results in 2018

T2K acquired more data in 2018 in neutrino and anti-neutrino modes, reaching the milestone of $3\times10^{21}$ protons on target (POT). The increased statistics yielded an even stronger indication of CP violation, with the highest probability for a $\delta_{CP}$ of about $\pi/2$.

IFAE continued its contribution to reducing the systematic errors associated with the neutrino cross-sections. The IFAE group continued to adapt the neutrino cross-section model based on a Local Fermi Gas into the official T2K Monte Carlo generator NEUT, and to compare the MC with experimental data from MINOS and MINERvA. Figures (c) and (d) show the impact of the implementation when compared with data.
Sending and receiving radio lists works exactly the same way as sending and receiving radio tables (see Radio tables). As for radio tables, you can insert radio list templates in HTML, LaTeX and Texinfo modes by calling org-list-insert-radio-list. Here are the differences from radio tables:

• Orgstruct mode must be active.
• Use the ORGLST keyword instead of ORGTBL.
• The available translation functions for radio lists don't take parameters.
• C-c C-c will work when pressed on the first item of the list.

Here is a LaTeX example. Let's say that you have this in your LaTeX file:

% BEGIN RECEIVE ORGLST to-buy

Pressing C-c C-c on the first item of the source list (the "a new house" item in the manual's example) will insert the converted LaTeX list between the two marker lines.
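For context, the surrounding template that org-list-insert-radio-list builds looks roughly like this (reconstructed from the radio-table conventions; the exact output may differ between Org versions):

```latex
% BEGIN RECEIVE ORGLST to-buy
% END RECEIVE ORGLST to-buy
\begin{comment}
#+ORGLST: SEND to-buy org-list-to-latex
- a new house
- a new computer
\end{comment}
```

The Org list lives inside the comment environment; C-c C-c on its first item regenerates the LaTeX list between the RECEIVE markers.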
Are there tight upper and lower bounds on the density of the sum of $n$ i.i.d. Laplace random variables that depend on $n$ and the individual Laplace densities?
# How do I use the MaNGA Catalog DRPall File?

## The DRPall file

The DRPall file is a summary file from the MaNGA Data Reduction Pipeline (DRP). This table contains target and observational information (e.g., number of exposures, IFU size, identifiers), as well as galaxy properties taken from the NSA catalog, such as redshift and photometry, for all the galaxies in the current data release. More information about this file can be found here. The datamodel for the DRPall file can be found here.

Below we give examples of how to access and use the DRPall data with Python, IDL, Marvin, and CAS. The Python examples have been tested with Python version 3.7 and IDL with version 8.8.

### Accessing the DRPall file with Python

If you are less familiar with how to manipulate tables in astropy, please see the Astropy primer for working with table data.

```python
import numpy as np            # importing the needed packages
from astropy.io import fits

drpall = fits.open('drpall-v3_1_1.fits')  # assumes you are in the same directory as the DRPall file
tbdata = drpall[1].data

# Print column names
tbdata.columns.names
```

Find the redshift of a galaxy: in this example, we will find the redshift for a galaxy with the MaNGA ID 12-193481, which just happens to be the galaxy used in the other tutorials. In the DRPall file, there is a column called 'mangaid' that contains the MaNGA IDs for all the observed MaNGA galaxies. The redshift in DRPall comes from the NSA catalog and is the column labelled 'nsa_z'.

```python
ind = np.where(tbdata['mangaid'] == '12-193481')  # finding the index for our galaxy using the MaNGA ID
print('redshift =', tbdata['nsa_z'][ind][0])      # printing the redshift at this index
# should print 'redshift = 0.0402719'
```

Find galaxies that match a specific ancillary catalog: now we will find which galaxies in DRPall were part of a specific ancillary catalog.
For this we use the MaNGA Target 3 bitmask (column name 'mngtarg3'), which is described, together with a table listing the different bits, here. We are interested in the Dwarf Galaxies catalog, which has the bitname 'Dwarf' and bit number 14.

```python
# select the column for the MaNGA Target 3 bitmask
targets = tbdata['mngtarg3']

# select rows from the Dwarf Galaxies with MaNGA ancillary catalog (which is bit 14)
dwarf_indices = np.where(targets & 1 << 14)[0]
print(dwarf_indices)
# Result:
# [ 3921  4134  7009  7104  7121  7260  7276  7668  7760  7793  7794  8072
#   8280  8447  8522  8796  8810  8871  8905  8907  9007  9061  9417  9602
#   9757 10134 10255 10283 10284 10419 10506]
```

Identifying unique galaxies: here we will find the unique galaxies observed with MaNGA. Please note this caveat about some duplicate galaxies with different MaNGA IDs. We will use the MaNGA Target bitmasks 1 and 3 (mngtarg1 and mngtarg3).

```python
# Find all main and ancillary target data cubes
cube_bools = (tbdata['mngtarg1'] != 0) | (tbdata['mngtarg3'] != 0)
cubes = tbdata[cube_bools]

# Find galaxies excluding those from the Coma, IC342, M31, and globular cluster
# ancillary programs (bits 19, 20, 21, 27)
targ3 = tbdata['mngtarg3']
galaxies = tbdata[cube_bools &
                  ((targ3 & 1 << 19) == 0) &
                  ((targ3 & 1 << 20) == 0) &
                  ((targ3 & 1 << 21) == 0) &
                  ((targ3 & 1 << 27) == 0)]
print('Number of galaxies', len(galaxies))
# Result: Number of galaxies 10296

# Get unique galaxies
uniq_vals, uniq_idx = np.unique(galaxies['mangaid'], return_index=True)
uniq_galaxies = galaxies[uniq_idx]
print('Unique galaxies', len(uniq_galaxies))
# Result: Unique galaxies 10160
```

Note that this number of unique galaxies does not take into account the data quality. Once galaxies with DRP3QUAL flags of critical or unusual are removed, the number of unique, high-quality galaxies in DR17 is 10010.

### Accessing the DRPall file with IDL

The DRPall file can be read in using, for example, MRDFITS. This will put the data into an IDL structure.
```idl
drpall = mrdfits('drpall-v3_1_1.fits', 1, hdr)  ; Binary table with 99 columns and 11273 rows

; Print column names
help, drpall, /str
```

Find the redshift of a galaxy: in this example, we will find the redshift for a galaxy with the MaNGA ID 12-193481, which just happens to be the galaxy used in the other tutorials. In the DRPall file, there is a column called 'mangaid' that contains the MaNGA IDs for all the observed MaNGA galaxies. The redshift in DRPall comes from the NSA catalog and is the column labelled 'nsa_z'.

```idl
ind = where(drpall.mangaid eq '12-193481')  ; finding the index of our galaxy
print, 'redshift =', drpall[ind].nsa_z      ; should print 'redshift = 0.040271900'
```

Identifying unique galaxies: here we will find the unique galaxies observed with MaNGA. Please note this caveat about some duplicate galaxies with different MaNGA IDs. We will use the MaNGA Target bitmasks 1 and 3 (mngtarg1 and mngtarg3).

```idl
; Find all main and ancillary target data cubes
cube_bools = (drpall.mngtarg1 ne 0) or (drpall.mngtarg3 ne 0)

; Trim to only these data cubes
cubes = drpall[where(cube_bools)]
print, 'Number of data cubes: ', n_elements(cubes)
; Result: 11236

; Find galaxies excluding those from the Coma, IC342, M31, and globular cluster
; ancillary programs (bits 19, 20, 21, 27)
galaxies = drpall[where(cube_bools and ((drpall.mngtarg3 and 2L^19+2L^20+2L^21+2L^27) eq 0))]

; Select unique galaxies
mangaid = galaxies.mangaid
uqmangaid = mangaid[uniq(mangaid[sort(mangaid)])]
print, 'Number of unique galaxies: ', n_elements(uqmangaid)
; Result: 10160
```

Note that this number of unique galaxies does not take into account the data quality. Once galaxies with DRP3QUAL flags of critical or unusual are removed, the number of unique, high-quality galaxies in DR17 is 10010.
### Accessing the DRPall file with Marvin Select Main Sample Galaxies To select the Main Sample galaxies (Primary + Secondary + Color Enhanced samples), we need to download the DR17 drpall file file and put it in the expected \$SAS_BASE_DIR directory. For more information on the location of your SAS_BASE_DIR environment variable, see the Marvin documentation. Let’s open the DRPall file: from marvin import config from marvin.utils.general.general import get_drpall_table config.setRelease('DR17') data = get_drpall_table() The Main Sample consists of the Primary, Secondary, and Color-Enhanced Samples, which correspond to MNGTARG1 bits 10, 11, and 12, respectively. import numpy as np primary = data['mngtarg1'] & 2**10 secondary = data['mngtarg1'] & 2**11 color_enhanced = data['mngtarg1'] & 2**12 main_sample = np.logical_or.reduce((primary, secondary, color_enhanced)) plateifus = data['plateifu'][main_sample] ### Accessing the DRPall file with CAS The DRPall file exists as a table in Catalog Archive Server (CAS). You can query the DRPall table under the Query tab in CASjobs. For example, to find all galaxies with redshift > 0.05 and stellar mass <1e10: SELECT mangaid, objra, objdec, nsa_z, nsa_sersic_mass
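Across the Python and Marvin examples above, the target selection reduces to plain integer bit tests. Here is a minimal sketch of that test using plain Python ints — the mask values below are made up for illustration, not real DRPall rows:

```python
# Minimal sketch of the targeting-bit test used above, with plain Python
# integers; the mask values here are invented, not real DRPall rows.
DWARF_BIT = 14  # 'Dwarf' ancillary bit in mngtarg3

def has_bit(value, bit):
    """True if `bit` is set in the integer bitmask `value`."""
    return bool(value & (1 << bit))

targets = [0, 1 << 14, (1 << 14) | (1 << 19), 1 << 20]
dwarf_rows = [i for i, v in enumerate(targets) if has_bit(v, DWARF_BIT)]
print(dwarf_rows)  # [1, 2]
```

The same test vectorizes directly over a numpy column, which is exactly what `targets & 1 << 14` does in the tutorial above.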
# Text along bent arrows in TikZ I want to write some text along bent arrows in TikZ. When I don't bend the arrow everything works as expected, but when I add it, the positioning of the text becomes absurd. Here is a MWE: \documentclass{beamer} \usepackage{tikz} \usetikzlibrary{arrows.meta} \begin{document} \begin{frame}{} \tikzstyle{leaf}=[shape=circle,draw=black,fill=green!20,minimum size=0.01cm] \tikzstyle{pool}=[shape=rectangle,draw=black,fill=blue!20,minimum width=4cm,minimum height=1cm] \begin{figure}[t] \begin{tikzpicture}[overlay,remember picture] \node[pool] (biomass_pool) at (0,-2) {Pool}; \node[leaf] (leaf_1) at (-5.5,1.5) {$x_1$}; \draw [{Latex[length=1.5mm]}-] (leaf_1) -- (biomass_pool) node [pos=.5, above, sloped] (TextNode1) {$q_1$}; \draw [-{Latex[length=1.5mm]},dotted] (leaf_1) to [bend left=5] (biomass_pool) node [pos=.5, below, sloped] (TextNode2) {$d_1$}; \end{tikzpicture} \end{figure} \end{frame} \end{document} and the output: q_1 is correctly placed since the arrow is not bent, but d_1 isn't. Is there a way to fix this? Or any solution to display nicely this double exchange between Pool and x_1? Many thanks. I would do something like this: \documentclass{beamer} \usepackage{tikz} \usetikzlibrary{arrows.meta} \begin{document} \begin{frame}{} \tikzstyle{leaf}=[shape=circle,draw=black,fill=green!20,minimum size=0.01cm] \tikzstyle{pool}=[shape=rectangle,draw=black,fill=blue!20,minimum width=4cm,minimum height=1cm] \begin{figure}[t] \begin{tikzpicture}[overlay,remember picture] \node[pool] (biomass_pool) at (0,-2) {Pool}; \node[leaf] (leaf_1) at (-5.5,1.5) {$x_1$}; \draw [{Latex[length=1.5mm]}-] (leaf_1) to [bend right=30] node [above, sloped] (TextNode1) {$q_1$} (biomass_pool); \draw [-{Latex[length=1.5mm]},dotted] (leaf_1) to [bend left=30] node [above, sloped] (TextNode2) {$d_1$} (biomass_pool); \end{tikzpicture} \end{figure} \end{frame} \end{document} When using to, the node has to be placed right after to, not after the next coordinate, i.e. 
(a) to node{foo} (b) instead of (a) to (b) node[midway]{foo};. \tikzstyle is I believe considered deprecated by the way. It still works, but the recommended method is \tikzset{style A/.style={...}, style B/.style={...}}. \documentclass{beamer} \usepackage{tikz} \usetikzlibrary{arrows.meta} \begin{document} \begin{frame}{} \tikzset{ leaf/.style={shape=circle,draw=black,fill=green!20,minimum size=0.01cm}, pool/.style={shape=rectangle,draw=black,fill=blue!20,minimum width=4cm,minimum height=1cm} } \begin{figure}[t] \begin{tikzpicture}[overlay,remember picture] \node[pool] (biomass_pool) at (0,-2) {Pool}; \node[leaf] (leaf_1) at (-5.5,1.5) {$x_1$}; \draw [{Latex[length=1.5mm]}-] (leaf_1) -- (biomass_pool) node [pos=.5, above, sloped] (TextNode1) {$q_1$}; \draw [-{Latex[length=1.5mm]},dotted] (leaf_1) to[bend left=5] node [below, sloped] (TextNode2) {$d_1$} (biomass_pool); \end{tikzpicture} \end{figure} \end{frame} \end{document} • do you really need named nodes for edge labels? • is it necessary that tikzpicture has options overlay,remember picture? • is necessary that the image is in figure environment (i don't see caption)? if answers are no, than i would rather use the following solution: \documentclass{beamer} \usepackage{tikz} \usetikzlibrary{arrows.meta, quotes} \begin{document} \begin{frame} \frametitle{My image} \centering \begin{tikzpicture}[%overlay,remember picture, % do you really need this? % auto, leaf/.style={circle,draw,fill=green!20,minimum size=1mm}, pool/.style={draw,fill=blue!20,minimum width=4cm,minimum height=1cm}, Arr/.style={-{Latex[length=1.5mm]}}, ] \node[pool] (biomass_pool) at (0,-2) {Pool}; \node[leaf] (leaf_1) at (-5.5,1.5) {$x_1$}; \draw[Arr] (biomass_pool) to [bend left=30,"$q_1$"] (leaf_1); \draw[Arr,dotted] (leaf_1) to [bend left=30,"$d_1$"] (biomass_pool); \end{tikzpicture} \end{frame} \end{document}
Suggested languages for you: Q19PGA_1 Expert-verified Found in: Page 572 ### Horngren'S Financial And Managerial Accounting Book edition 6th Author(s) Tracie L. Miller-Nobles, Brenda L. Mattison Pages 992 pages ISBN 9780134486833 # Classifying and accounting for debt and equity investmentsJetway Corporation generated excess cash and invested in securities as follows: 2018 Jul. 2 Purchased 4,200 shares of Pogo, Inc. common stock at $12.00 per share. Jetway plans to sell the stock within three months when the company will need the cash for normal operations. Jetway does not have significant influence over Pogo.Aug. 21 Received a cash dividend of$0.80 per share on the Pogo stock investment.Sep. 16 Sold the Pogo stock for $13.40 per share. Oct. 1 Purchased a Violet bond for$20,000 at face value. Jetway classifies the investment as trading and short-term.Dec. 31 Received a $100 interest payment from Violet. 31 Adjusted the Violet bond to its market value of$22,000. Requirements 1. Classify each of the investments made during 2018. (Assume the equity investments represent less than 20% of the ownership of outstanding voting stock.) Both investments will be classified as current assets because they are acquired for a short period. See the step by step solution ## Definition of Outstanding Stock The stock of the shares still in possession of the stockholders is known as outstanding stock. This value is used as a determinant of earnings per share. ## Classification of Each Investment 1. The investment made in Pogo, Inc will be considered as an Equity investment under current assets because the company intends to sell it within three months. 2. The investment made in the bond will be reported as trading debt investment in the current asset section.
# Differential gain of amplifier with current mirror Consider this pdf document, pages 58-59-60. It is a differential amplifier with a current mirror as active load. According to that document, if I take the unbalanced output in the right-hand branch (drain of M2), the transconductance gain is $g_m$, while if I take the unbalanced output in the left-hand branch (drain of M1), the transconductance gain is $g_m / 2$. It is because the current of M2 and the current of the mirror are both entering the M2 drain, as regards the differential mode signal. Let $v_{o1}$ and $v_{o2}$ be respectively the M1 drain voltage and the M2 drain voltage. If $R_{out}$ is the output resistance of this amplifier looking into both $v_{o1}$ and $v_{o2}$, the voltage differential gain is different in the two nodes, being $A'_{v,dm} = g_m R_{out} / 2$ for $v_{o1}$ and $A''_{v,dm} = g_m R_{out}$ for $v_{o2}$. First question: Wasn't this circuit perfectly symmetrical? Moreover: the outputs can be written as $$v_{o1} = A_{v,cm} v_{icm} + A_{v,dm} \displaystyle \frac{v_{idm}}{2}$$ $$v_{o2} = A_{v,cm} v_{icm} - A_{v,dm} \displaystyle \frac{v_{idm}}{2}$$ where the two input were $$v_{i1} = v_{icm} + \displaystyle \frac{v_{idm}}{2}$$ $$v_{i2} = v_{icm} - \displaystyle \frac{v_{idm}}{2}$$ ($v_{icm}$ is the common-mode signal component; $v_{idm}$ is the differential-mode signal component) Second question: What does happen if $A_{v,dm}$ is different between the two output nodes? Should I consider $v_{o1} - v_{o2} = (A'_{v,dm} + A''_{v,dm}) v_{icm}$? First question: No, the circuit isn't perfectly symmetrical. The current mirror performs a differential- to single-ended conversion. If you wanted a perfectly symmetrical circuit, you would make M3 and M4 current sources, and then use some sort of common-mode feedback to set the appropriate current (so that $V_{O1}$ and $V_{O2}$ stay in a usable range). 
The way it is now, $A'_{v,dm}=\frac{g_m}{2g_{md}}$ (where $g_{md}$ is the transconductance of the diode-connected device M3), while $A''_{v,dm}=g_mr_{out}$. Remember that the impedance seen looking into M3 is simply $\frac{1}{g_{md}}$. These two gains are obviously very different: $A'_{v,dm}\ll A''_{v,dm}$. The gain $A'_{v,dm}$ is so small that it's pretty useless, so the signal $v_{o1}$ is usually ignored, and a single, single-ended output on $v_{o2}$ is used. Second question: I already said it, but $A'_{v,dm}$ is generally so small that $v_{o1}$ is ignored and isn't fed to the next gain stage (or output, or whatever follows). This means that the gain is simply $A_{v,dm}=A''_{v,dm}$. If you really want to, though, you can take both $v_{o1}$ and $v_{o2}$ as outputs to get $A_{v,dm}=A'_{v,dm}+A''_{v,dm}$. • Thank you for your very useful answer. I didn't provide an expression for $R_{out}$, but we can state that looking into output 2 it is about $r_{o4} || r_{o2}$, the output resistances (due to channel length modulation) of M4 and M2. It could be in kOhms. The $R_{out}$ relative to the output 1 instead could be $r_{o1} || 1/g_{m2} || r_{o2}$ and the total resistance could be $\simeq 1/g_{m2}$, so much smaller than kOhms. Is this the reason why you state that $A'_{v,dm} \ll A''_{v,cm}$? – BowPark Apr 19 '15 at 19:29
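To see the size difference concretely, here is a quick numerical sketch. The small-signal values ($g_m$, $g_{md}$, $r_o$) are illustrative assumptions, not numbers from the linked notes:

```python
# Illustrative small-signal values (assumptions, not from the document)
gm = 1e-3    # transconductance of M1/M2 (A/V)
gmd = 1e-3   # transconductance of the diode-connected M3 (A/V)
ro = 100e3   # ro2 ~ ro4 from channel-length modulation (ohm)

A1 = gm / (2 * gmd)              # gain at v_o1: load is ~1/gmd
A2 = gm * (ro * ro / (ro + ro))  # gain at v_o2: load is ro2 || ro4
print(A1, A2)  # 0.5 50.0
```

With these numbers $A''_{v,dm}/A'_{v,dm} = 100$, consistent with the answer's point that $v_{o1}$ is usually ignored.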
It turns out to be a favourable week or two for me to finally finish a number of papers that had been at a nearly completed stage for a while.  I have just uploaded to the arXiv my article “Sumset and inverse sumset theorems for Shannon entropy“, submitted to Combinatorics, Probability, and Computing.  This paper evolved from a “deleted scene” in my book with Van Vu entitled “Entropy sumset estimates“.  In those notes, we developed analogues of the standard Plünnecke-Ruzsa sumset estimates (which relate quantities such as the cardinalities $|A+B|, |A-B|$ of the sum and difference sets of two finite sets $A, B$ in an additive group $G$ to each other), to the entropy setting, in which the finite sets $A \subset G$ are replaced instead with discrete random variables $X$ taking values in that group G, and the (logarithm of the) cardinality |A| is replaced with the Shannon entropy ${\textbf H}(X) := \sum_{x \in G} {\Bbb P}(x \in X) \log \frac{1}{{\Bbb P}(x \in X)}.$ This quantity measures the information content of X; for instance, if ${\textbf H}(X) = k \log 2$, then it will take k bits on the average to store the value of X (thus a string of n independent copies of X will require about nk bits of storage in the asymptotic limit $n \to \infty$).  The relationship between entropy and cardinality is that if X is the uniform distribution on a finite non-empty set A, then ${\textbf H}(X) = \log |A|$.  If instead X is non-uniformly distributed on A, one has $0 < {\textbf H}(X) < \log |A|$, thanks to Jensen’s inequality. It turns out that many estimates on sumsets have entropy analogues, which resemble the “logarithm” of the sumset estimates.  For instance, the trivial bounds $|A|, |B| \leq |A+B| \leq |A| |B|$ have the entropy analogue ${\textbf H}(X), {\textbf H}(Y) \leq {\textbf H}(X+Y) \leq {\textbf H}(X) + {\textbf H}(Y)$ whenever X, Y are independent discrete random variables in an additive group; this is not difficult to deduce from standard entropy inequalities.  
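These trivial entropy bounds are easy to check numerically. The sketch below (with two arbitrary example distributions of my choosing) convolves independent discrete distributions and verifies ${\textbf H}(X), {\textbf H}(Y) \leq {\textbf H}(X+Y) \leq {\textbf H}(X) + {\textbf H}(Y)$:

```python
import math
from itertools import product

def H(p):
    """Shannon entropy (in nats) of a dict mapping value -> probability."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def add_indep(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q."""
    r = {}
    for (x, px), (y, qy) in product(p.items(), q.items()):
        r[x + y] = r.get(x + y, 0.0) + px * qy
    return r

# arbitrary example distributions (made up for illustration)
p = {0: 0.5, 1: 0.3, 2: 0.2}
q = {0: 0.6, 5: 0.4}
s = add_indep(p, q)
assert max(H(p), H(q)) <= H(s) + 1e-12
assert H(s) <= H(p) + H(q) + 1e-12
```

In this particular example the sums of support points are all distinct, so $X+Y$ determines $(X,Y)$ and the upper bound is attained up to rounding.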
Slightly more non-trivially, the sum set estimate $|A+B| \leq \frac{|A-B|^3}{|A| |B|}$ established by Ruzsa, has an entropy analogue ${\textbf H}(X+Y) \leq 3 {\textbf H}(X-Y) - {\textbf H}(X) - {\textbf H}(Y)$, and similarly for a number of other standard sumset inequalities in the literature (e.g. the Rusza triangle inequality, the Plünnecke-Rusza inequality, and the Balog-Szemeredi-Gowers theorem, though the entropy analogue of the latter requires a little bit of care to state).  These inequalities can actually be deduced fairly easily from elementary arithmetic identities, together with standard entropy inequalities, most notably the submodularity inequality ${\textbf H}(Z) + {\textbf H}(W) \leq {\textbf H}(X) + {\textbf H}(Y)$ whenever X,Y,Z,W are discrete random variables such that X and Y each determine W separately (thus $W = f(X) = g(Y)$ for some deterministic functions f, g) and X and Y determine Z jointly (thus $Z = h(X,Y)$ for some deterministic function f).  For instance, if X,Y,Z are independent discrete random variables in an additive group G, then $(X-Y,Y-Z)$ and $(X,Z)$ each determine $X-Z$ separately, and determine $X,Y,Z$ jointly, leading to the inequality ${\textbf H}(X,Y,Z) + {\textbf H}(X-Z) \leq {\textbf H}(X-Y,Y-Z) + {\textbf H}(X,Z)$ which soon leads to the entropy Rusza triangle inequality ${\textbf H}(X-Z) \leq {\textbf H}(X-Y) + {\textbf H}(Y-Z) - {\textbf H}(Y)$ which is an analogue of the combinatorial Ruzsa triangle inequality $|A-C| \leq \frac{|A-B| |B-C|}{|B|}.$ All of this was already in the unpublished notes with Van, though I include it in this paper in order to place it in the literature.  The main novelty of the paper, though, is to consider the entropy analogue of Freiman’s theorem, which classifies those sets A for which $|A+A| = O(|A|)$.  Here, the analogous problem is to classify the random variables $X$ such that ${\textbf H}(X_1+X_2) = {\textbf H}(X) + O(1)$, where $X_1,X_2$ are independent copies of X.  
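As a sanity check on the entropy Ruzsa triangle inequality above, the following sketch evaluates both sides for three small, arbitrarily chosen independent distributions on the integers:

```python
import math
from itertools import product

def H(p):
    """Shannon entropy (nats) of a dict value -> probability."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def diff_indep(p, q):
    """Distribution of X - Y for independent X ~ p, Y ~ q."""
    r = {}
    for (x, px), (y, qy) in product(p.items(), q.items()):
        r[x - y] = r.get(x - y, 0.0) + px * qy
    return r

# arbitrary independent distributions on the integers (illustrative only)
pX = {0: 0.2, 1: 0.5, 3: 0.3}
pY = {0: 0.7, 2: 0.3}
pZ = {0: 0.4, 1: 0.6}

lhs = H(diff_indep(pX, pZ))                                  # H(X - Z)
rhs = H(diff_indep(pX, pY)) + H(diff_indep(pY, pZ)) - H(pY)  # H(X-Y)+H(Y-Z)-H(Y)
assert lhs <= rhs + 1e-12
```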
Let us say that X has small doubling if this is the case. For instance, the uniform distribution U on a finite subgroup H of G has small doubling (in fact ${\textbf H}(U_1+U_2)={\textbf H}(U) = \log |H|$ in this case). In a similar spirit, the uniform distribution on a (generalised) arithmetic progression P also has small doubling, as does the uniform distribution on a coset progression H+P.  Also, if X has small doubling, and Y has bounded entropy, then X+Y also has small doubling, even if Y and X are not independent.  The main theorem is that these are the only cases: Theorem 1. (Informal statement) X has small doubling if and only if $X = U + Y$ for some uniform distribution U on a coset progression (of bounded rank), and Y has bounded entropy. For instance, suppose that X was the uniform distribution on a dense subset A of a finite group G.  Then Theorem 1 asserts that X is close in a “transport metric” sense to the uniform distribution U on G, in the sense that it is possible to rearrange or transport the probability distribution of X to the probability distribution of U (or vice versa) by shifting each component of the mass of X by an amount Y which has bounded entropy (which basically means that it primarily ranges inside a set of bounded cardinality).  The way one shows this is by randomly translating the mass of X around by a few random shifts to approximately uniformise the distribution, and then deal with the residual fluctuation in the distribution by hand.  Theorem 1 as a whole is established by using the Freiman theorem in the combinatorial setting combined with various elementary convexity and entropy inequality arguments to reduce matters to the above model case when X is supported inside a finite group G and has near-maximal entropy. I also show a variant of the above statement: if X, Y are independent and ${\textbf H}(X+Y) = {\textbf H}(X)+O(1) = {\textbf H}(Y)+O(1)$, then we have $X \equiv Y+Z$ (i.e. 
X has the same distribution as Y+Z for some Z of bounded entropy (not necessarily independent of X or Y).  Thus if two random variables are additively related to each other, then they can be additively transported to each other by using a bounded amount of entropy. In the last part of the paper I relate these discrete entropies to their continuous counterparts ${\textbf H}_{\Bbb R}(X) := \int_{{\Bbb R}} p(x) \log \frac{1}{p(x)}\ dx,$ where X is now a continuous random variable on the real line with density function $p(x)\ dx$.  There are a number of sum set inequalities known in this setting, for instance ${\textbf H}_{\Bbb R}(X_1 + X_2) \geq {\textbf H}_{\Bbb R}(X) + \frac{1}{2} \log 2$, for independent copies $X_1,X_2$ of a finite entropy random variable X, with equality if and only if X is a Gaussian.  Using this inequality and Theorem 1, I show a discrete version, namely that ${\textbf H}(X_1 + X_2) \geq {\textbf H}(X) + \frac{1}{2} \log 2 - \varepsilon$, whenever $\varepsilon> 0$ and $X_1,X_2$ are independent copies of a random variable in ${\Bbb Z}$ (or any other torsion-free abelian group) whose entropy is sufficiently large depending on $\varepsilon$.  This is somewhat analogous to the classical sumset inequality $|A+A| \geq 2 |A| - 1$ though notice that we have a gain of just $\frac{1}{2} \log 2$ rather than $\log 2$ here, the point being that there is a Gaussian counterexample in the entropy setting which does not have a combinatorial analogue (except perhaps in the high-dimensional limit).  The main idea is to use Theorem 1 to trap most of X inside a coset progression, at which point one can use Fourier-analytic additive combinatorial tools to show that the distribution $X_1+X_2$ is “smooth” in some non-trivial direction r, which can then be used to approximate the discrete distribution by a continuous one. 
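The $\frac{1}{2}\log 2$ gain can be observed numerically: for $X$ uniform on $\{0,\dots,n-1\}$, the sum of two independent copies has a discrete triangular distribution whose entropy exceeds ${\textbf H}(X)$ by roughly half a nat, comfortably above $\frac{1}{2}\log 2 \approx 0.347$. A quick sketch:

```python
import math

def H(p):
    """Shannon entropy (nats) of a list of probabilities."""
    return -sum(q * math.log(q) for q in p if q > 0)

n = 100
uniform = [1.0 / n] * n  # X uniform on {0, ..., n-1}

# distribution of X1 + X2 for independent copies (a discrete triangle)
conv = [0.0] * (2 * n - 1)
for i in range(n):
    for j in range(n):
        conv[i + j] += uniform[i] * uniform[j]

gain = H(conv) - H(uniform)
print(gain)  # about 0.5 nats for large n
assert gain > 0.5 * math.log(2)  # the claimed lower bound (minus epsilon)
```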
I also conjecture more generally that the entropy monotonicity inequalities established by Artstein, Barthe, Ball, and Naor in the continuous case also hold in the above sense in the discrete case, though my method of proof breaks down because I no longer can assume small doubling.
## Activities: Virtual programming lab mod_vpl

Maintained by Juan Carlos Rodríguez-del-Pino

VPL is an activity module to manage programming assignments

570 sites 39 fans

Moodle 2.7, 2.8, 2.9, 3.0, 3.1, 3.2

VPL - Virtual Programming Lab is an activity module that manages programming assignments; its salient features are:

• Students can edit program source code in the browser
• Students can run programs interactively in the browser
• You can run tests to review the programs.
• Allows searching for similarity between files.
• Allows setting editing restrictions and avoiding external text pasting.

### Awards

• Thu, 30 Jun 2016, 6:33 PM Hi, I would like to know whether the editor window can be enlarged. Also, what is the gcc version number? Also, in the Octave compiler the imread function is not supported. Can you clarify? Regards, John Blesswin
• Sat, 2 Jul 2016, 4:19 PM Hi John, the editor has a button "fullscreen" (its correct name might be "fullwindow") that lets you use the full window. It hides elements outside the editor. If you also set your browser in fullscreen mode (F11) you will get the maximum room. VPL uses the version of gcc that you installed in your jail server. Please describe your problem with Octave imread in more detail. Beware that the current version of VPL is incompatible with binary files. As a workaround you can use binary files in base64 format, e.g. if I want to use a file image.jpg, I must convert the file to base64 and save it as image.jpg.b64. During your program execution you will get access to the file image.jpg (the system will decode image.jpg.b64 to image.jpg). Best regards.
• Wed, 2 Nov 2016, 4:01 AM Hi Juan, Can I use this plugin, along with the jail server, to run PHP assignments? Thank you, Horatiu
• Thu, 3 Nov 2016, 11:54 PM Yes, you can use VPL + jail server to run PHP assignments. The VPL 3.1.X default script can run PHP5 CLI but not web programs. The VPL 3.2 default script will run PHP web applications.
But VPL is flexible enough to allow assignments of PHP web applications. You can use and customize the default script of VPL 3.2 for PHP and use it as vpl_run.sh https://github.com/jcrodriguez-dis/moodle-mod_vpl/blob/v3.2/jail/default_scripts/php_run.sh If you are using php7, remove php5 from line 10. Best regards, Juan Carlos.
• Fri, 4 Nov 2016, 12:14 AM Notice that you must reduce the security level of the firewall in the jail server service configuration to run PHP for a web application.
• Thu, 5 Jan 2017, 12:16 AM Happy new year! You could add Erlang to the list of languages available for VPL! I've tried VPL for Erlang: https://www.erlang.org/ => that's OK (on my server and on the demo server): http://demovpl.dis.ulpgc.es/moodle/mod/vpl/view.php?id=399 It's just a small test.

1) I've changed the vpl_run.sh (for prolog) into:

. common_script.sh
check_program escript
cat common_script.sh > vpl_execution
echo "escript $VPL_SUBFILE0" >> vpl_execution
chmod +x vpl_execution

And 2) tested with an Erlang program:

-module(plusOne).
-compile(export_all).

plusOne(N) -> N + 1.

main([]) ->
    io:format('Give a number (end your line with .)~n', []),
    {ok, X} = io:read(':>'),
    Y = plusOne(X),
    io:format('~p + 1 => ~p ~n', [X, Y]).

That's all. And it works. But I'm not a specialist with Erlang, and not a specialist with VPL. Maybe there are better ways to do that. Good surprise: the code editor nicely colorized the Erlang code (without anything to be done). Best for you, Denis B.
• Thu, 5 Jan 2017, 12:59 PM Hi, when I submitted a C++ program to be compiled or evaluated, it showed the error message "g++: error: unrecognized command line option ‘-fno-diagnostics-color’". How can I fix it? Best for you, James TT
• Sat, 7 Jan 2017, 3:20 AM Hello Denis Bouhineau, thanks for your contribution. The next major release, VPL 3.3, will include a default script for Erlang. Best regards.
• Sat, 7 Jan 2017, 3:44 AM Hello tt tt, sorry for the inconvenience.
You have two options:

1) Edit run_c.sh and run_cpp.sh at mod/vpl/jail/default_scripts/ and remove those args. You can also replace the C and C++ related scripts with the ones from VPL 3.1.5.
2) Upgrade your C++ compiler.

I will try to solve the problem in future releases. Best regards, Juan Carlos.
• Sat, 4 Feb 2017, 1:51 PM Hi Juan Carlos Rodríguez-del-Pino, I am using the VPL module of Moodle and I need to customize some of its functionality. I need to run sample test cases directly on the RUN button and don't want a command prompt to open where the student enters input. Any suggestions or ideas you can give on this?
• Thu, 16 Feb 2017, 9:15 AM Hello, has someone already used VPL with Java code with a package and two classes in the same package? I tested this example and I get an error. Thanks for your help
• Thu, 16 Feb 2017, 8:05 PM Sorry Kushal Patel for the late answer. If you want to avoid user interaction when the students RUN their code, you must modify the default script of the language you are using and redirect the standard input to /dev/null. E.g. if you are using Java, you must get the Java default script, copy it to your vpl_run.sh, and then replace the line

echo "java -enableassertions \$MAINCLASS" >> vpl_execution

by

echo "{" >> vpl_execution
echo "java -enableassertions \$MAINCLASS" >> vpl_execution
echo "} < /dev/null" >> vpl_execution

I recommend using the "based on" feature. Best regards.
• Thu, 16 Feb 2017, 8:11 PM Hello Moussa Sarr, you can use Java classes and packages in VPL. To resolve the problem you need to give full details of your code and how you use it in VPL. Best regards.
• Wed, 2 Aug 2017, 1:44 PM Hi all, I am able to run the following C program, but I need to evaluate this program. Please let us know if anyone can help me.
#include <stdio.h>
#include <unistd.h>
#include <signal.h>
#include <sys/wait.h>

int main()
{
    int pid, id;
    if (fork() == 0)
    {
        /* open for writing */
        printf("From child %d\n", getpid());
        raise(2);
    }
    pid = wait(&id);
    printf("From parent \n");
    printf("Signal no= %d \n", id & 355);
    //waitpid(-1, NULL, 0);
    return 0;
}

• Fri, 4 Aug 2017, 2:42 AM I don't know an easy way to test this type of program. I can imagine making something similar to a facade pattern that replaces the API you are using, to log and test the student's program behavior. But I also think it may be a hard and fragile way to test students' programs. Best regards, Juan Carlos.
# Quantitative Aptitude - Arithmetic Ability Questions ## What is Quantitative Aptitude - Arithmetic Ability? Quantitative Aptitude - Arithmetic Ability test helps measure one's numerical ability, problem solving and mathematical skills. Quantitative aptitude - arithmetic ability is found in almost all the entrance exams, competitive exams and placement exams. Quantitative aptitude questions includes questions ranging from pure numeric calculations to critical arithmetic reasoning. Questions on graph and table reading, percentage analysis, categorization, simple interests and compound interests, clocks, calendars, Areas and volumes, permutations and combinations, logarithms, numbers, percentages, partnerships, odd series, problems on ages, profit and loss, ratio & proportions, stocks &shares, time & distance, time & work and more . Every aspirant giving Quantitative Aptitude Aptitude test tries to solve maximum number of problems with maximum accuracy and speed. In order to solve maximum problems in time one should be thorough with formulas, theorems, squares and cubes, tables and many short cut techniques and most important is to practice as many problems as possible to find yourself some tips and tricks in solving quantitative aptitude - arithmetic ability questions. Wide range of Quantitative Aptitude - Arithmetic Ability questions given here are useful for all kinds of competitive exams like Common Aptitude Test(CAT), MAT, GMAT, IBPS and all bank competitive exams, CSAT, CLAT, SSC Exams, ICET, UPSC, SNAP Test, KPSC, XAT, GRE, Defence, LIC/G IC, Railway exams,TNPSC, University Grants Commission (UGC), Career Aptitude test (IT companies), Government Exams and etc. • #### Volume and Surface Area Q: In an examination, a student scores 4 marks for every correct answer and loses 1 mark for every wrong answer. 
If he attempts all 60 questions and secures 130 marks, the number of questions he attempts correctly is:

A) 35 B) 38 C) 40 D) 42

Explanation: Let the number of correct answers be x. The number of incorrect answers is then (60 – x).
4x – (60 – x) = 130 => 5x = 190 => x = 38

Q: A can do a piece of work in 10 days, B in 15 days. They work together for 5 days, and the rest of the work is finished by C in two more days. If they get Rs. 3000 as wages for the whole work, what are the daily wages of A, B and C respectively (in Rs.)?

A) 200, 250, 300 B) 300, 200, 250 C) 200, 300, 400 D) None of these

Explanation:
A's 5 days' work = 50%
B's 5 days' work = 33.33%
C's 2 days' work = 16.66%     [100 – (50 + 33.33)]
Ratio of contributions of A, B and C = 50 : 33.33 : 16.66 = 3 : 2 : 1
A's total share = Rs. 1500
B's total share = Rs. 1000
C's total share = Rs. 500
A's one day's earning = Rs. 300
B's one day's earning = Rs. 200
C's one day's earning = Rs. 250

Q: What was the day of the week on 16th July, 1776?

A) Tuesday B) Wednesday C) Monday D) Saturday

Explanation: 16th July, 1776 = (1775 years + the period from 1st Jan, 1776 to 16th July, 1776)
Counting of odd days:
1600 years have 0 odd days.
100 years have 5 odd days.
75 years = (18 leap years + 57 ordinary years) = [(18 x 2) + (57 x 1)] = 93 days = (13 weeks + 2 days) = 2 odd days
So 1775 years have (0 + 5 + 2) = 7 odd days = 0 odd days.
Jan   Feb   Mar   Apr   May   Jun   Jul
31 + 29 + 31 + 30 + 31 + 30 + 16 = 198 days = (28 weeks + 2 days)
Total number of odd days = (0 + 2) = 2.
The required day was 'Tuesday'.

Q: A student has to obtain 33% of the total marks to pass. He got 125 marks and failed by 40 marks.
The maximum marks are:

A) 500 B) 600 C) 800 D) 1000

Explanation: The student got 125 marks and still failed by 40 marks,
=> minimum pass mark = 125 + 40 = 165.
Given that the minimum pass mark is 33% of the total marks:
33% of total = 165 => total = 165 x 100/33 = 500

Q: Two cards are drawn at random from a pack of 52 cards. What is the probability that either both are black or both are queens?

A) 52/221 B) 55/190 C) 55/221 D) 19/221

Explanation: We have n(S) = 52C2 = (52 x 51)/(2 x 1) = 1326.
Let A = event of getting both black cards, B = event of getting both queens, so A∩B = event of getting both black queens.
n(A) = 26C2 = (26 x 25)/(2 x 1) = 325, n(B) = 4C2 = (4 x 3)/(2 x 1) = 6, and n(A∩B) = 2C2 = 1.
P(A) = n(A)/n(S) = 325/1326; P(B) = n(B)/n(S) = 6/1326; P(A∩B) = n(A∩B)/n(S) = 1/1326
P(A∪B) = P(A) + P(B) – P(A∩B) = (325 + 6 – 1)/1326 = 330/1326 = 55/221

Q: Tea worth Rs. 135/kg and Rs. 126/kg is mixed with a third variety in the ratio 1 : 1 : 2. If the mixture is worth Rs. 153 per kg, the price of the third variety per kg will be ____?

A) Rs. 169.50 B) Rs. 170 C) Rs. 175.50 D) Rs. 180

Explanation: Since the first and second varieties are mixed in equal proportions, their average price = Rs. (126 + 135)/2 = Rs. 130.50.
So the mixture is formed by mixing two varieties, one at Rs. 130.50 per kg and the other at say Rs. x per kg, in the ratio 2 : 2, i.e., 1 : 1. We have to find x.
By the rule of alligation, (x – 153)/(153 – 130.50) = 1 => x – 153 = 22.50 => x = 175.50.
Hence, the price of the third variety = Rs. 175.50 per kg.

Q: A batsman makes a score of 87 runs in the 17th inning and thus increases his average by 3. Find his average after the 17th inning.

Let the average after the 17th inning = x.
Then the average after the 16th inning = x – 3.
16(x – 3) + 87 = 17x => x = 87 – 48 = 39

Q: Insert the missing number: 7, 26, 63, 124, 215, 342, (....)

A) 391 B) 421 C) 481 D) 511
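Several of the worked answers above are easy to cross-check programmatically. The sketch below verifies the card-probability answer by inclusion-exclusion, and the calendar answer via Python's proleptic Gregorian `datetime` (which agrees with the odd-days method for 1776):

```python
from math import comb
from fractions import Fraction
from datetime import date

# Both black or both queens, drawing 2 from 52 (inclusion-exclusion)
p = Fraction(comb(26, 2) + comb(4, 2) - comb(2, 2), comb(52, 2))
assert p == Fraction(55, 221)

# Day of the week for 16 July 1776 (weekday(): Monday == 0, so 1 is Tuesday)
assert date(1776, 7, 16).weekday() == 1
print(p, "-> Tuesday")
```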
# Math Help - 'Reduction Formula' problem 1. ## 'Reduction Formula' problem I'm just not sure how to go about solving this: In= ∫(x^n).(1-x)^(3/2)dx The integral has limits from 1 -> 0 And I need to show : In = (2n/2n+5) I(n-1) Thanks in advance for any help 2. ## Re: 'Reduction Formula' problem Originally Posted by Mrhappysmile I'm just not sure how to go about solving this: In= ∫(x^n).(1-x)^(3/2)dx The integral has limits from 1 -> 0 And I need to show : In = (2n/2n+5) I(n-1) Integrate by parts, using the fact that $\int(1-x)^{3/2} = -\tfrac25(1-x)^{5/2}$. Then \begin{aligned}I_n &= \Bigl[-\tfrac25(1-x)^{5/2}x^n\Bigr]_0^1 + \int_0^1nx^{n-1}\tfrac25(1-x)^{5/2}dx \\ &= \tfrac{2n}5\int_0^1 x^{n-1}(1-x)(1-x)^{3/2}dx = \tfrac{2n}5I_{n-1} - \tfrac{2n}5I_n. \end{aligned} 3. ## Re: 'Reduction Formula' problem Originally Posted by Opalg Integrate by parts, using the fact that $\int(1-x)^{3/2} = -\tfrac25(1-x)^{5/2}$. Then \begin{aligned}I_n &= \Bigl[-\tfrac25(1-x)^{5/2}x^n\Bigr]_0^1 + \int_0^1nx^{n-1}\tfrac25(1-x)^{5/2}dx \\ &= \tfrac{2n}5\int_0^1 x^{n-1}(1-x)(1-x)^{3/2}dx = \tfrac{2n}5I_{n-1} - \tfrac{2n}5I_n. \end{aligned} Of course! I'd done exactly what you'd done but I didn't seen the simple bit i.e. dividing through by $5+2n$ to give the answer. Thanks
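The recurrence can also be sanity-checked numerically. The sketch below uses a midpoint rule to compute $I_0$ and $I_1$ and confirms $I_1 = \frac{2\cdot 1}{2\cdot 1+5} I_0$; the exact values are $I_0 = 2/5$ and $I_1 = 4/35$:

```python
def f(x, n):
    # integrand x^n (1-x)^{3/2}
    return x**n * (1.0 - x) ** 1.5

def integrate(n, steps=20000):
    """Midpoint rule for I_n = integral_0^1 x^n (1-x)^{3/2} dx."""
    h = 1.0 / steps
    return h * sum(f((k + 0.5) * h, n) for k in range(steps))

I0, I1 = integrate(0), integrate(1)
# exact: I_0 = 2/5, and the recurrence gives I_1 = (2/7) I_0 = 4/35
assert abs(I0 - 2 / 5) < 1e-6
assert abs(I1 - (2 / 7) * I0) < 1e-6
```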
Relationship between congruence and kernel of homomorphism First the terms as I understand them: 1. The congruence is defined as equivalence that preserves structure (the operations) of algebra. 2. Homomorphism is a map f: A->B between two algebraic structures of the same type, that preserves structure. This means that it doesn't matter if we map the result of operation, or map the operands and then perform operation. For example f(a+b) = f(a)+f(b); a,b \in A. 3. By this definition: Kernel of homomorphism is a set of elements from A which are mapped to neutral element of B. For example if the map is from (R^2,*) -> (R,*) (2D vectors to real number) defined as f(x) = |x| (length of vector), all vectors that would map to 1 (which is neutral in (R,*)) would make up the kernel. Therefore the kernel is set of all unit vectors. 4. From other definition: Generally, the kernel is a congruence relation. Regarding this, I have following questions: 1. Is there anything wrong with the definitions or examples above? 2. How can it be, that generally the kernel of map is a congruence, but kernel of homomorphism is a set of elements from A? Shouldn't be these 2 distinct? What is the relationship between them? 3. How would a quotient algebra defined as A/(kerF), where kerF is kernel of homomorphism look like in example from third definition? How is it different from quotient algebra where kerF is general kernel from fourth definition? • In the example, what multiplication are you using on $\mathbb R^2$? Multiplication as complex numbers? A quick note: The (linear algebra/group theory) kernel is NOT the congruence relation, but defines one, by setting $a \sim b$ whenever $a-b \in f^{-1}(0)$ (or $ab^{-1} \in f^{-1}(1)$ respectively). This needs an operation of negation/inversion. You can however always get a congruence relation from a homomorphism $f$ by setting $a \sim b$ iff $f(a) = f(b)$. 
– Georg Lehner Nov 17 '15 at 21:12

• Yes, the multiplication should be as with complex numbers. Sorry for not mentioning that. Regarding the second part, I'm kind of lost here. So, what is the kernel? Is it always a set of elements of A? And how can the kernel 'define' a congruence? In what manner? – Raven Nov 17 '15 at 21:25

In "nonlinear" contexts the correct replacement of the kernel is a construction called the kernel pair. The kernel pair of a morphism $f : X \to Y$ is abstractly the pullback $X \times_Y X$. Concretely, in familiar concrete categories, it can be described as the subset

$$\{ (x, x') \in X \times X : f(x) = f(x') \}$$

of $X \times X$. This subset describes an equivalence relation on $X$, and in the case of a morphism $f : G \to H$ of groups, the equivalence relation is that $x$ and $x'$ differ by an element of the kernel $\text{ker}(f)$ in the usual sense. With a little more effort you can show that this sets up a natural bijection between congruences on a group and normal subgroups; see this blog post for some details.

The kernel pair is equipped with two projection morphisms to $X$, and the correct replacement of quotienting by the kernel is taking their coequalizer.
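The kernel-pair description can be made concrete in a small finite example. The sketch below is my own illustration, not from the thread: it takes the group homomorphism f: Z6 → Z3, f(x) = x mod 3, builds the kernel pair {(x, x′) : f(x) = f(x′)}, and checks that it is a congruence, i.e. compatible with addition, and that it matches the classical kernel in the way described above.

```python
# Kernel pair of the group homomorphism f: Z6 -> Z3, f(x) = x mod 3.
from itertools import product

n, m = 6, 3

def f(x):
    return x % m

# Kernel pair: all pairs identified by f (an equivalence relation on Z6).
kernel_pair = {(x, y) for x, y in product(range(n), repeat=2) if f(x) == f(y)}

# The classical kernel: elements mapped to the neutral element 0 of Z3.
kernel = {x for x in range(n) if f(x) == 0}

# The two notions agree: (x, y) lies in the kernel pair exactly when
# x - y lies in the classical kernel.
assert all(((x - y) % n in kernel) == ((x, y) in kernel_pair)
           for x, y in product(range(n), repeat=2))

# The kernel pair is a congruence: it is compatible with addition.
for (a, b), (c, d) in product(kernel_pair, repeat=2):
    assert ((a + c) % n, (b + d) % n) in kernel_pair

print(sorted(kernel))  # the subgroup {0, 3}
```

Quotienting Z6 by this congruence identifies 0 with 3, 1 with 4, and 2 with 5, recovering Z3, exactly the bijection between congruences and normal subgroups mentioned above.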
## Update II [R for BE/BA]

After re-re-re-re-reading the documentation for optim, I think it can be forced to look for either a minimum (the default) or a maximum through the control argument; or, more simply, the objective function can be set up for minimisation by putting a minus sign on the return value of Obj.F12...

... but even so, the estimates are still somewhat off. I could be wrong, but...

Best regards,
ElMaestro

"Pass or fail" (D. Potvin et al., 2008)
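The sign-flip trick ElMaestro describes is language-agnostic: any minimiser finds the maximum of f when handed −f. A minimal Python sketch of the idea (the quadratic objective and the golden-section minimiser below are stand-ins of my own, not Obj.F12 or optim itself):

```python
# Maximising f by minimising -f, illustrated with a golden-section search
# standing in for a generic minimiser. The objective is an invented example.

def f(x):
    return -(x - 2.0) ** 2 + 3.0   # true maximum: f(2) = 3

def minimize(g, lo, hi, tol=1e-8):
    """Golden-section search for the minimum of a unimodal g on [lo, hi]."""
    phi = (5 ** 0.5 - 1) / 2
    while hi - lo > tol:
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if g(a) < g(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

x_max = minimize(lambda x: -f(x), -10.0, 10.0)  # minus sign on the return
print(round(x_max, 4), round(f(x_max), 4))      # close to 2.0 and 3.0
```

For the record, R's optim offers the same two routes the post alludes to: negating the objective, or (to my knowledge) setting `control = list(fnscale = -1)` to turn the minimisation into a maximisation.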
# Student's t-distribution

(Infobox plots of the probability density function and cumulative distribution function omitted.)

| | |
| --- | --- |
| Parameters | $\nu > 0$ degrees of freedom (real) |
| Support | $x \in (-\infty, +\infty)$ |
| PDF | $\frac{\Gamma\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\left(\frac{\nu}{2}\right)}\left(1+\frac{x^{2}}{\nu}\right)^{-\frac{\nu+1}{2}}$ |
| CDF | $\frac{1}{2}+x\,\Gamma\left(\frac{\nu+1}{2}\right)\cdot\frac{{}_{2}F_{1}\left(\frac{1}{2},\frac{\nu+1}{2};\frac{3}{2};-\frac{x^{2}}{\nu}\right)}{\sqrt{\pi\nu}\,\Gamma\left(\frac{\nu}{2}\right)}$, where ${}_{2}F_{1}$ is the hypergeometric function |
| Mean | $0$ for $\nu > 1$, otherwise undefined |
| Median | $0$ |
| Mode | $0$ |
| Variance | $\frac{\nu}{\nu-2}$ for $\nu > 2$, $\infty$ for $1 < \nu \le 2$, otherwise undefined |
| Skewness | $0$ for $\nu > 3$, otherwise undefined |
| Excess kurtosis | $\frac{6}{\nu-4}$ for $\nu > 4$, $\infty$ for $2 < \nu \le 4$, otherwise undefined |
| Entropy | $\frac{\nu+1}{2}\left[\psi\left(\frac{1+\nu}{2}\right)-\psi\left(\frac{\nu}{2}\right)\right]+\log\left[\sqrt{\nu}\,B\left(\frac{\nu}{2},\frac{1}{2}\right)\right]$ |
| MGF | undefined |
| CF | $\frac{K_{\nu/2}\left(\sqrt{\nu}\lvert t\rvert\right)\left(\sqrt{\nu}\lvert t\rvert\right)^{\nu/2}}{\Gamma(\nu/2)\,2^{\nu/2-1}}$ for $\nu > 0$, where $K_{\nu}(x)$ is a modified Bessel function of the second kind[1] |
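The PDF and variance rows can be checked numerically with only the standard library. This is a rough verification sketch of my own; the trapezoid-style grid, the step size, and the cutoff at ±50 are arbitrary choices, adequate here because the ν = 5 density has thin enough tails:

```python
import math

def t_pdf(x, nu):
    """Student's t probability density, straight from the table's formula."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)

nu = 5.0
h = 0.01
xs = [-50 + i * h for i in range(int(100 / h) + 1)]

# Riemann sums: total mass should be ~1, second moment ~ nu / (nu - 2).
total = sum(t_pdf(x, nu) for x in xs) * h
var = sum(x * x * t_pdf(x, nu) for x in xs) * h

print(round(total, 3), round(var, 3))  # roughly 1.0 and 5/3
```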
# Cullen number

A Cullen number $C_{m}$ is a number of the form $C_m = 2^{m}m+1$, where $m$ is a nonnegative integer. Almost all Cullen numbers are composite, and they have $2m-1$ as a factor whenever that number is a prime of the form $8k-3$ (with $k$ a positive integer). The first few Cullen numbers are 1, 3, 9, 25, 65, 161, 385, 897, 2049, 4609 (listed in A002064 of Sloane's OEIS). With the exception of 3, no Fermat number is also a Cullen number.

Title: Cullen number
Canonical name: CullenNumber
Date of creation: 2013-03-22 17:21:34
Last modified on: 2013-03-22 17:21:34
Owner: PrimeFan (13766)
Last modified by: PrimeFan (13766)
Numerical id: 5
Author: PrimeFan (13766)
Type: Definition
Classification: msc 11A51
Related topic: WoodallNumber
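Both the opening definition and the divisibility fact are easy to check computationally. A quick sketch (the trial-division primality test is my own minimal helper):

```python
# Cullen numbers C_m = 2^m * m + 1, and the divisibility fact from the text.

def cullen(m):
    return 2 ** m * m + 1

first_ten = [cullen(m) for m in range(10)]
print(first_ten)  # [1, 3, 9, 25, 65, 161, 385, 897, 2049, 4609]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# If p = 2m - 1 is a prime of the form 8k - 3, then p divides C_m.
for m in range(2, 200):
    p = 2 * m - 1
    if is_prime(p) and p % 8 == 5:      # p = 8k - 3  <=>  p ≡ 5 (mod 8)
        assert cullen(m) % p == 0, (m, p)
```

For instance, m = 3 gives p = 5 (= 8·1 − 3), and indeed 5 divides C₃ = 25; m = 7 gives p = 13 (= 8·2 − 3), and 13 divides C₇ = 897.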
# How to color the area under a curve using tikz datavisualization?

Normally I'm using plain TikZ for curve plotting. I need "school book style" coordinate systems with a 50mm grid. I tried using the tikz data visualization library.

\documentclass{article}
\usepackage{tikz,pgfplots}
\usetikzlibrary{datavisualization}
\usetikzlibrary{datavisualization.formats.functions}
\pagestyle{empty}
\begin{document}
\begin{tikzpicture}[]
\datavisualization [
  school book axes={unit=0.5},
  visualize as smooth line,
  x axis={label={$x$},grid,grid={minor steps between steps=1}},
  y axis={label={$y$},grid,grid={minor steps between steps=1}},
  every major grid/.style = {style={gray, thin}},
  every minor grid/.style = {style={gray, very thin}}
]
data [format=function] {
  var x : interval [-2:2];
  func y = 1/2*(\value x)^2;
}
info' {
  \fill[fill=lightgray] (visualization cs: x=1, y=0) -- plot [domain=1:2] (visualization cs: x=\x,y={0.5*(\x)^2}) -- (visualization cs: x=2, y=0) --cycle;
}
;
\end{tikzpicture}
\end{document}

The only thing I did not get right is coloring the area under the curve. I tried the following in the info' block:

\fill[fill=lightgray] (visualization cs: x=1, y=0) -- plot [domain=1:2] (visualization cs: x=\x,y={0.5*(\x)^2}) -- (visualization cs: x=2, y=0) --cycle;

But this results in a PGF Math Error:

Package PGF Math Error: Could not parse input '0.5*(1)^2'

I guess I need a way to tell the plot command to use the visualization cs.

Cheers

Better to create a new visualizer different from visualize as line. So I copy the definition of the latter from tikzlibrarydatavisualization.code.tex.
The only difference is that every path/.style={draw}, is changed to every path/.style={draw,fill},

\documentclass[tikz,border=10pt]{standalone}
\usetikzlibrary{datavisualization,datavisualization.formats.functions}
\begin{document}
\makeatletter
\tikzdatavisualizationset{
  visualize as pie/.style={
    new object={
      when=after survey,
      store=/tikz/data visualization/visualizers/#1,
      class=plot handler visualizer,
      arg1=#1,
      arg2={\tikz@dv@plot@handler,\tikz@dv@plot@mark@maker}
    },
    new visualizer={#1}{%
      every path/.style={draw,fill},
      style={every mark/.append style={color=visualizer color}},
      mark size=2pt,
      semithick,
      color=visualizer color,
      mark=none,
      /tikz/data visualization/every visualize as line/.try,
    }{visualizer in legend=\tikz@dv@legend@entry@as@example},
    #1={straight line}
  },
  visualize as pie/.default=pie,
}
\makeatother
\begin{tikzpicture}
\datavisualization[
  school book axes={unit=0.5},
  x axis={label={$x$},grid,grid={minor steps between steps=1}},
  y axis={label={$y$},grid,grid={minor steps between steps=1}},
  every major grid/.style={style={gray,thin}},
  every minor grid/.style={style={gray,very thin}},
  visualize as pie
]
data point[x=-2, y=0]
data[format=function]{var x :interval [-2:2];func y =1/2*(\value x)^2;}
data point[x=2, y=0];
\end{tikzpicture}
\end{document}

• So I can overlay both visualizers to get a colored area from a to b. Thank you! Can you give me a hint where I can specify the fill color/pattern? – cw79 Feb 15 '15 at 17:19

• I believe it's generally good practice to close your "hacks" with \makeatother. – Radon Rosborough Aug 1 '15 at 21:31

Not exactly using visualization but pgfplots with its fillbetween library. This will look easy.
\documentclass{article}
\usepackage{pgfplots}
\pgfplotsset{compat=1.11}
\usetikzlibrary{fillbetween}
\pagestyle{empty}
\begin{document}
\begin{tikzpicture}[]
\begin{axis}[
  grid=both,
  ymin=0,
  xmin=-3,xmax=3,
  axis on top
]
\addplot[name path=curve, draw=none, domain=-3:3, samples=100, smooth] {0.5*x^2};
\addplot[name path=base, draw=none, domain=-3:3] {0};
\addplot[lightgray] fill between[of=curve and base, soft clip={domain=1:2}];
\addplot[thick, blue, domain=-3:3, samples=100, smooth] {0.5*x^2};
\end{axis}
\end{tikzpicture}
\end{document}

I have first filled the region and then plotted the curve so as to avoid the lines around the fill. Also, axis on top helps in keeping the x axis on top in this case.

• I have rather rigid requirements for my plots. I need a 50mm grid in print, centered axes and the option to draw figures (e.g. triangles, rectangles, circles) onto the coordinate system. – cw79 Feb 15 '15 at 14:03

• @cw79 All those can be done within this framework. :-) – user11232 Feb 15 '15 at 14:05

Using coordinate calculation (\usetikzlibrary{calc}) one can do a coordinate transformation. Unfortunately, I only found this kind of messy transformation; maybe someone can post a cleaner way to do this.

info' {
  \begin{scope}[shift={(visualization cs:x=0, y=0)},
    x={($(visualization cs:x=1, y=0)-(visualization cs:x=0, y=0)$)},
    y={($(visualization cs:x=0, y=1)-(visualization cs:x=0, y=0)$)}]
    \fill[fill=lightgray] (1,0) -- plot [domain=1:2] (\x,{0.5*(\x)^2}) -- (2, 0) --cycle;
  \end{scope}
}
# Myocardial molecular biology: an introduction

1. Nigel J Brand,
2. Paul J R Barton

Correspondence to: Dr Nigel J Brand, National Heart and Lung Institute, Faculty of Medicine, Imperial College of Science Technology and Medicine, Dovehouse Street, London, SW3 6LY, UK; n.brand{at}ic.ac.uk

The recent publication of draft copies of the human genome sequence from both public and private sector consortia has fuelled anticipation that eventually, once all genes have been identified, we will be able to ascertain which of them are involved in human diseases, including those affecting the cardiovascular system. Understanding the molecular biology behind both inherited and acquired disorders is now viewed as essential to provide a full picture of the aetiology and progression of disease. Within the past decade considerable advances have been made in identifying the genetic basis of myocardial disorders such as familial hypertrophic cardiomyopathy and dilated cardiomyopathy, as well as the molecular signalling pathways and gene regulatory events that characterise acquired disease such as pressure overload induced cardiac hypertrophy. Furthermore, by defining the molecular processes underlying normal development we may be able to manipulate immature cell phenotypes such as those of embryonic stem cells or skeletal myoblasts to replace damaged, terminally differentiated cells such as cardiac myocytes. In this review we outline the basic principles of gene expression, the different mechanisms by which expression is regulated, and how these can be examined experimentally.

## DNA MAKES RNA MAKES PROTEIN

The blueprint for any organism is contained within its genome in the form of chromosomes and is written in the universal four "base" language of adenine (A), guanine (G), cytosine (C), and thymine (T). Chromosomes are built of chromatin, double stranded DNA wrapped around a multi-protein complex core composed of histone proteins.
This DNA contains the language (DNA or nucleotide sequence) that can be read and translated into proteins, and these areas of DNA are called genes.1, 2 In higher organisms, ranging from yeast to plants and man, practically all genes are interrupted, with sequences coding for protein (coding exons) separated by regions of non-coding DNA called introns. The beginnings and ends of genes are usually marked by exons that do not code for parts of the protein, the so called non-coding exons. Within coding exons, contiguous groups of three bases (codons) form the genetic code. The 64 (4³) individual codons specify the 20 amino acids from which proteins are made, or signal the start (initiator methionine codon) or end (stop codon) of translation. It is therefore the contiguous order of the codons within a gene that delineates the linear amino acid sequence of the protein produced.

In general, gene expression describes the production of RNA and, subsequently, protein from a gene. This process can be split into three major parts: transcription of the gene in the nucleus to make primary RNA, splicing of the primary transcript to form the mature messenger RNA (mRNA), and translation of the mRNA in the cytoplasm to produce the protein for which the gene codes (fig 1). Transcription is carried out by an enzyme called RNA polymerase II (RNA pol II) under the direction of specialised basal transcription factors that form a multi-protein complex with RNA pol II on the gene promoter.1 The promoter contains the start site of transcription, usually designated +1, which marks the beginning of the first exon of the gene and hence corresponds to the first nucleotide of the mRNA. Binding sites for various transcription factors, which are DNA binding proteins with highly specific affinities for particular DNA sequences, sequester transcription factors to the gene, where they participate in boosting (or, in some cases, repressing) the level of transcription.
Transcription factors may bind within the promoter or lie within areas called enhancers that are located distally—usually upstream, but occasionally downstream, of the promoter. Considerable effort has focused on identifying promoter and enhancer sequences responsible for directing gene transcription and on the identification of the factors that act on them.3 Once bound to their cognate DNA sequences, transcription factors help drive the rate at which RNA pol II initiates fresh rounds of transcription. The polymerase moves along the gene making a primary RNA copy of one strand of the DNA duplex, copying both exonic and intronic sequences. This primary RNA transcript is subsequently processed to remove the intron derived sequences and join the exons together (RNA splicing). Following some 5′ and 3′ modifications, such as the addition of a 3′ poly-adenylic acid tract (polyA tail), the final mRNA product is exported from the nucleus to the cytoplasm, where it serves as a template for the production of protein by the ribosomes. Subsequent post-translational modifications, such as cleaving off any propeptide or leader sequences which direct the protein to its ultimate destination in the cell, or the attachment of phosphate or acetyl groups to specific amino acid residues, may be necessary to produce the final functional form of the protein.

Figure 1 The process of gene expression. Chromosomes are scaffolds of DNA organised around protein (histones) in units called nucleosomes. DNA is unwound from histones before transcription of a gene by RNA polymerase II and transcription factors (coloured). The primary RNA transcript, which is a copy of both exonic (red) and intronic (blue) DNA sequences, is processed subsequently to remove intronic sequences (mRNA splicing). The resulting mRNA is then exported to the cytoplasm for translation and subsequent post-translational modification such as methylation, glycosylation or phosphorylation.
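The codon logic set out above, contiguous groups of three bases read in order until a stop codon, can be made concrete with a toy translation step. This is purely illustrative: the four-entry codon table and the input sequence are minimal examples of my own, not from the article (the standard genetic code has all 64 codons).

```python
# Toy illustration of translation: reading codons (groups of three bases)
# from a coding-strand DNA sequence. Only a handful of codons from the
# standard genetic code are included, enough for the example.
CODON_TABLE = {
    "ATG": "Met",   # initiator methionine codon
    "GCT": "Ala",
    "GAA": "Glu",
    "TAA": "STOP",  # stop codon
}

def translate(dna):
    """Translate a coding sequence codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("ATGGCTGAATAA"))  # Met-Ala-Glu
```

Changing the contiguous order of the codons changes the output peptide, which is exactly the point made above about codon order delineating the amino acid sequence.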
Detection of mRNA by northern blot is illustrated: the blot shows that mRNA for the slow skeletal isoform of troponin T (TnTs) is expressed only in adult skeletal muscle (Sk) and not in fetal (F) or adult (H) heart or liver (L). Rehybridisation of the blot with a probe to 18S rRNA (18S) shows the presence of RNA in each lane. Protein expression is analysed by western blotting: in the example shown, a universal antibody recognising all three troponin I (TnI) isoforms shows the distribution of the fast skeletal (f), slow skeletal (s), and cardiac (c) isoforms in adult skeletal muscle (Sk) and fetal heart (F). Figure courtesy of KA Dellow.

## CONTROL POINTS FOR GENE EXPRESSION

All of the stages of gene expression are points at which regulation can be exerted. However, the primary point of control is at the level of transcription. Many promoters and enhancers of myocardial genes have been cloned, and transcription factors active in the heart and belonging to a variety of gene families have been identified in recent years (table 1).4–9 The overwhelming observation that can be drawn is that expression of individual genes is regulated (1) through the coordinate binding of different types of transcription factors, (2) by interactions between factors and with ancillary co-factors such as histone acetylases (HATs) or deacetylases (HDACs), which do not necessarily bind to DNA themselves, and (3) through signal transduction pathways which influence their activity by, for example, phosphorylation.3

Most transcription factors are modular in structure, containing separable protein domains that carry out a particular function such as DNA binding, dimerisation with other family members (for example, the related bHLH proteins HIF-1α and β, products of separate genes9), or serving as transcriptional activation domains (TADs) to promote high level transcription. Ultimately, once bound to DNA, transcription factors interact with other bound proteins and act to increase the rate of RNA synthesis.
TAD activity, which is often measured by introducing cloned transcription factors into cells grown in culture (see below), may be intrinsic or may reflect the binding of a co-activator or co-factor protein which itself possesses significant activation properties. For example, myocardin is a recently identified heart restricted co-factor for the ubiquitously expressed serum response factor (SRF).6 SRF binds to the promoters of several genes expressed in heart, including cardiac α actin, and has been shown to interact with many factors including the homeobox factor Nkx-2.5 and the zinc finger factor GATA-4 to regulate expression. In contrast to Nkx-2.5 and GATA-4, which are transcriptional activators in their own right, myocardin does not bind to DNA but complexes with bound SRF and serves as an extremely potent co-factor for transcription, promoting up to a thousand-fold activation in combination with SRF.

Table 1 Some examples of key transcription factors expressed in the heart

Transcription factors may be expressed in a highly tissue restricted manner or at particular developmental stages, in turn regulating the expression of their target genes. For example, GATA factors are expressed from the earliest detectable stages of cardiogenesis and may play a role in gene regulation at this stage.10 Later, Nkx-2.5 shows regional variation in expression and may play a specific role in the developing myocardial conduction system. Most transcription factors can be grouped into gene families on the basis of sequence similarity in regions of functional importance, such as a DNA binding domain or a protein dimerisation interface. Such a high degree of sequence homology allows new family members to be discovered by searching DNA or protein sequence databases across diverse phyla.
In this way Nkx-2.5 was identified as the mammalian homologue of a Drosophila gene called tinman (named after one of the characters in The Wizard of Oz), originally identified as a mutation that resulted in lack of development of the heart equivalent in the fly, the dorsal vessel.8 In humans, mutations in the Nkx-2.5 gene have been correlated with a variety of cardiac anomalies including tetralogy of Fallot and idiopathic atrioventricular block.11

Currently, there is renewed interest in the role that chromatin structure plays in regulating gene expression.12 It has long been known that when DNA is wrapped around histones in the form of chromatin, gene activity is silenced, and that the localised unwinding of the DNA from chromatin, accompanied by histone displacement, is vital to allow gene expression to progress. Central to recent studies has been the identification and biochemical characterisation of proteins that possess HAT or HDAC activity, adding or removing, respectively, acetyl groups from exposed lysine residues on histones. HAT activity correlates with activation of gene expression, while HDAC activity results in repression.13 In several cases the functions of these proteins have been shown to be intimately linked to the state of transcription factors binding to target genes. For example, the active heterodimeric form of the basic helix-loop-helix transcription factor HIF-1 (table 1) senses changes in partial pressure of oxygen and thus acts as a hypoxia sensor in several systems, including angiogenesis and vascular remodelling.9 Once bound to the DNA of target genes for regulation, C terminal TADs in the HIF heterodimer interact with transcriptional co-activators such as the CREB-binding protein, CBP. (In cardiac muscle, the most likely co-activator is the related protein p300.) These large proteins possess intrinsic chromatin remodelling activity by recruiting to the DNA still more proteins which allow chromatin to unwind.
Probably the best understood system is currently that involving retinoid and steroid hormone receptors such as thyroid hormone receptor α1 (TRα1), which, once it has bound its cognate ligand T3, activates expression of genes such as cardiac α myosin heavy chain.2 The binding of T3 to the hormone binding domain of TRα1 results in a conformational change in the protein's structure, allowing the receptor to interact with co-activators. The net result is that HAT activity promotes localised unwinding of chromatin, allowing access of RNA pol II and basal transcription factors to the DNA. In the absence of T3, the nuclear receptor still binds to DNA but interacts instead with co-repressors such as N-CoR and SMRT, which then recruit HDACs to the DNA, leading to chromatin condensation and repression of gene expression.14

The activation of transcription factors by phosphorylation is a focal point for transducing extracellular stimuli through MAP kinase signal transduction pathways. For example, CBP associates only with the phosphorylated form of the CREB protein, a transcription factor that binds the cyclic AMP response element found in many gene promoters. Once bound to CREB, CBP forms protein–protein interactions with the basal transcription factor TFIIB, allowing transcription by RNA pol II to progress. Members of the myocyte enhancer factor-2 (MEF-2) family of transcription factors (table 1), which are widely expressed but appear to be enriched and have particular roles in skeletal and cardiac muscle, may regulate changes in gene expression arising from a hypertrophic stimulus.
MEF-2 proteins have been implicated as responders to MAP kinases activated by hypertrophic stimuli in cardiac myocytes.15 The hypertrophic agonists endothelin-1 and phenylephrine activate p38 MAPKs in cultured rat neonatal cardiac myocytes,16 and in vivo p38 activity increases in aortic banded mice which go on to develop pressure overloaded hypertrophy (Wang and colleagues 1998, cited in Han and Molkentin15). In rat, a similarly induced hypertrophy results in an increase in DNA binding activity of MEF-2.17 As development of hypertrophy is associated with changes in transcription of various myocardial genes which require MEF-2, this suggests that MEF-2 may be a direct target for MAP kinase signalling (fig 2).

Figure 2 Linking MEF2 to hypertrophy. Hypertrophic stimuli activate intracellular signalling pathways. Upstream protein kinases (for example, MEKKs) activate MAP kinase family members (MKKs) which in turn phosphorylate the three MAP kinases p38-MAPK, JNKs, and ERKs. MAP kinases have been linked to the phosphorylation of certain transcription factors; for example, p38-MAPK has been shown to activate MEF2 family members (see text for details). This implicates MEF2 proteins as direct transducers of intracellular signalling pathways to bring about some, or all, of the changes in gene expression associated with hypertrophy. Figure courtesy of KA Dellow.

A second point at which gene expression can be controlled is at the level of mRNA splicing. The primary RNA transcript of many genes may be alternatively spliced in that certain exons may be excluded from, or included in, the transcript to produce the final mRNA. In this way, multiple proteins may be generated from a single gene according to the combination of exon derived RNA segments that are spliced together. The vertebrate tropomyosin (TM) genes are examples of genes expressed in both cardiac and skeletal muscle that are subject to complex patterns of alternative mRNA splicing.
For example, two isoforms of the α-TM gene in Xenopus differ in their inclusion of alternative 3′ untranslated region exons and show restricted expression in the embryo.18 The XTMα7 isoform is found in the somites, from which the skeletal muscle develops, whereas the XTMα2 isoform is expressed in both somites and embryonic heart. In the adult, XTMα2, but not XTMα7, is selectively expressed in striated muscle and heart.

Following translation of the mRNA into polypeptide, production of mature active protein may require several steps, each of which is open to regulation. This is well illustrated by the matrix metalloproteinases (MMPs), which are believed to play a key role in myocardial remodelling.19 MMPs are produced in an inactive form (as a zymogen) which requires cleavage to produce active enzyme, and are further regulated by specific inhibitory molecules (tissue inhibitors of metalloproteinases, or TIMPs). Examining MMP gene expression at the level of RNA is of importance to understanding gene regulation, but may therefore not be a good indicator of MMP enzyme activity. Clearly, it is important to understand the biological system in question before deciding which is the most relevant level of regulation. For the purposes of this review we will be focusing on the initial stage of gene expression, namely transcription, and the methods of examining this as well as monitoring RNA content.

## DISSECTING PROMOTER FUNCTION

In order to understand how genes are regulated in the heart, many gene promoters have been isolated and characterised with regard to the key regulatory DNA sequences they harbour and the transcription factors that bind them. A widely used method of measuring promoter function is to insert (clone) the promoter and various lengths of upstream sequence into an artificial plasmid-based construct in front of a reporter gene whose expression can be easily monitored.
Typically the firefly gene luciferase or the bacterial genes chloramphenicol acetyl transferase (CAT) and β-galactosidase (LacZ) are used for this purpose. Constructs can be introduced into cells in culture by a variety of transfection techniques, or into whole animals using transgenic approaches. By careful choice of expression constructs containing progressively smaller deletions, the positions of DNA sequences responsible for high level transcription, tissue specificity or specific responses to stimuli (for example, stretch or agonist induced hypertrophy) can be found. The binding sites for candidate transcription factors can then be pinpointed accurately through mutagenesis of individual nucleotides to test the validity of those sequences. In this way, we and others have dissected the promoter of the cardiac troponin I gene, which is only ever expressed in cardiac muscle. Deletion analysis of the human gene promoter and upstream sequences has revealed several important transcription factor binding sites within 100 nucleotides upstream of +1 (the conventional notation for promoter sequence is negative numbering running upstream from the transcription start site). Among these is an A/T rich element centred around −30 that binds the TATA-box factor, TBP, the octamer protein Oct-1, and several MEF2 proteins. There are two binding sites in the human TnIc gene for GATA-4 and a C-rich sequence around −95 that encompasses both binding sites for the zinc finger factor Sp1 and a CACC-box with the sequence CCCACCCC.20 Mutation of each site in TnIc promoter-CAT reporter constructs results in a 50–95% reduction in transcriptional activity when transfected into cultured cardiac myocytes, suggesting that each site serves to bind proteins involved in maximal transcriptional activity. The identity of these proteins was characterised using the electrophoretic mobility shift assay (EMSA), also known as band shift assay. 
In this assay, a radiolabelled double stranded DNA fragment containing a putative binding site for a transcription factor (usually referred to as a cassette, and generated by annealing short synthetic oligonucleotides corresponding to the two complementary strands of DNA) is mixed with an extract of nuclear proteins prepared from cells or tissue. If a protein binds the cassette, it will retard its migration compared to unbound cassette when subjected to electrophoresis through a non-denaturing polyacrylamide gel, on account of the added mass of the protein. Mutagenesis of individual nucleotides in the cassette, or titration of unlabelled competitor cassettes in molar excess, enables us to analyse the specificity of interaction between factor and DNA. By incubating the protein–DNA complex with antibody to putative factors, a "supershift" complex can be obtained (due to the added mass of the antibody) and the identity of bound proteins thereby confirmed (fig 3). Furthermore, by using nuclear extracts from different cells, an indication of the distribution of the factor can be determined. For example, our group has recently shown that of four proteins binding the human TnIc CACC-box, two are members of the widely expressed Sp family of zinc finger factors while the other two appear to be expressed only in cardiac myocytes.21 These experiments therefore identify the regions involved and the factors they bind. Similar experiments in mouse have been taken further by showing that only 230 bp of the mouse TnIc promoter are necessary to drive cardiac restricted expression of a LacZ reporter gene in transgenic mice.22

Figure 3 Visualising DNA–protein interaction by electrophoretic mobility shift assay (EMSA). A double stranded oligonucleotide cassette containing a consensus binding site for GATA factors (A/TGATAA/G) was radiolabelled and incubated with nuclear protein extracts from neonatal cardiac myocytes.
Lane 1: A specific GATA-4–DNA complex is formed (solid arrowhead) which migrates behind (hence mobility shift) the free probe (FP). Lane 2: The identity of the bound protein is confirmed by addition of a specific antibody, which results in a DNA/protein/antibody complex (supershift) (arrow). Specificity of binding can be demonstrated by mutation of the oligonucleotide sequence, which results in loss of GATA-4 binding—not shown (see Dellow and colleagues21).

The role of specific factors in regulating a promoter can be assessed by simultaneous introduction (co-transfection) of suitable promoter-reporter constructs with an expression construct encoding the factor(s) in question. For example, the role of GATA-4 in regulating cardiac specific expression has been examined in a number of contexts. In an elegant experiment, a reporter construct containing the promoter and upstream sequences of the α-MHC gene, which contains two binding sites for GATA-4, was barely active when injected into skeletal muscle, which lacks endogenous GATA-4.23 However, expression could be boosted fourfold by co-injection of an expression vector for GATA-4 (fig 4). In contrast, a mutant reporter construct in which both GATA-4 binding sites had been mutated exhibited only 12% of the activity of the wild-type construct.

Figure 4 GATA-4 directs expression of an α-MHC-CAT reporter when co-injected into skeletal muscle, which lacks endogenous GATA-4. Neither GATA-4 nor the cardiac α-MHC gene, which contains GATA-4 binding sites in its promoter, is normally expressed in skeletal muscle. Co-injection of a promoter-reporter construct containing a CAT gene under control of the wild type (WT) α-MHC promoter together with a GATA-4 expression vector results in a fourfold increase in reporter activity compared to its activity in the absence of GATA-4. Mutation of the GATA-4 binding sites (mut) abrogates activation by the transcription factor (see Molkentin and colleagues23).
## METHODS FOR MEASURING GENE EXPRESSION

Many techniques have been established for studying gene structure and expression. In particular, a variety of methods for measuring mRNA have been developed that allow abundance, tissue specificity, and developmental expression to be followed. The relative advantages and disadvantages of the major techniques are shown in table 2.

Table 2 Comparison of some commonly used methods for RNA detection

### Southerns, northerns, and westerns

Electrophoresis of DNA or RNA molecules through an agarose or polyacrylamide gel matrix forms the backbone of some key molecular biology techniques, as the speed and distance that molecules migrate depend on their size. Hence, DNA or RNA fragments can be easily separated according to size.2 Denaturing and transferring DNA fragments out of the gel onto a membrane of similar dimensions to the gel is known as Southern blotting, so called in deference to its inventor, Ed Southern. The blot preserves the spatial distribution of the separated fragments and fixes them permanently on the membrane. Thus, we can separate out individual molecules from a complex starting mixture, such as total genomic DNA digested with restriction enzymes, and look at specific DNA sequences by hybridising the resulting Southern blot with gene specific probes. Often, the high degree of conservation between the same gene from different species can be exploited to use a probe to hybridise across species on a Southern blot. For example, a new gene from heart may be identified in mouse and we would wish to analyse the gene structure of its counterpart in man.
As many genes belong to extensive gene families related in terms of DNA or protein sequence (homology), this can be exploited to use one gene to identify a related family member, as in the earlier cited example of using the fly gene tinman to isolate its vertebrate homologue Nkx-2.5.24 The majority of techniques for studying the expression of a gene take messenger RNA as the starting material. As with DNA analysis by Southern blotting, RNA can be studied by northern blotting (fig 1). In northern blotting, RNA molecules are resolved on agarose gels and a DNA probe for the gene of interest is then used to identify expression of the gene and to gauge the size of the transcript. By including RNA samples from various human tissues on a blot it is possible to determine in which tissues in the body a particular gene is expressed. Various methods of measuring RNA are described in table 2.

The expression profile of proteins can be determined in a similar way by resolving a protein mixture by electrophoresis on a polyacrylamide gel matrix under denaturing conditions (to ensure proteins migrate according to their molecular mass). A modified blot procedure (western blotting) is used to transfer the proteins to a membrane; the proteins are then renatured and the target protein(s) identified using specific antibodies (fig 1). Analysis of protein expression is an important aspect of understanding the constituents and function of the cell, and modern techniques of analysis, referred to generically as proteomics, have been reviewed elsewhere.25

### Polymerase chain reaction

One of the most useful innovations in molecular biology in the last few decades has been the development and continued refinement of the polymerase chain reaction (PCR).
This technique is an enzymatic reaction which allows the exponential amplification of specific DNA sequences from vanishingly small amounts of starting material, often leading to an amplification of a million-fold or more.26 PCR amplification works by successive rounds of thermal denaturation at temperatures above 90°C to separate the two strands of the DNA template, cooling to allow annealing of oligonucleotide primers with their target sequences, and elongation by a thermostable DNA polymerase at 72°C (whereby new DNA is synthesised by extension of the annealed primers), thereby making two copies of the original target. Each round of PCR results in a doubling of the target DNA.

The amplification power of PCR makes it well suited to identifying gene expression in RNA, particularly RNA prepared from biopsies or similar limiting sources of material. RNA is prepared and then converted to cDNA (reverse transcription) to provide the initial DNA template for PCR amplification. Small, chemically synthesised DNA molecules specific to the gene under investigation (oligonucleotide primers) are used to define the two ends of the region to be amplified, leading to the accumulation of a product of defined size which can later be resolved by gel electrophoresis and, if necessary, transferred to a filter by Southern blotting for downstream analysis. PCR has many uses in the clinical environment, including rapid and efficient amplification of bacterial or viral gene sequences for detection of infection, blood typing, and identification of single base changes or other polymorphisms characteristic of genetic variation or a disease phenotype.26 While powerful in terms of detection, PCR methods are in general difficult to quantify with any real accuracy. Thus, while reverse transcriptase PCR (RT-PCR) is widely used to detect RNA in tissue or cell extracts, such data are often presented only as “semi-quantitative”.
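The doubling arithmetic above is easy to sketch. With ideal doubling, n cycles yield 2^n copies per starting template; real reactions amplify by (1 + efficiency) per cycle with efficiency below 1. The efficiency figure in the second call is illustrative, not from the text.

```python
def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copies after `cycles` rounds of PCR.

    efficiency = 1.0 models the ideal per-cycle doubling described in
    the text; real reactions multiply by (1 + efficiency) each cycle.
    """
    return start_copies * (1 + efficiency) ** cycles

# Ideal doubling: 20 cycles turn a single template into about a million
# copies, matching the "million-fold or more" amplification quoted above.
print(pcr_copies(1, 20))        # → 1048576.0
print(pcr_copies(1, 30, 0.9))   # a more realistic 30-cycle estimate
```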
To overcome these limitations, “real time” quantitative PCR was developed. This is based on the principle that quantitative detection of the PCR product at each round of the PCR cycle allows the investigator to plot a complete amplification curve for each reaction and thereby select the part of the PCR process where exponential amplification is being achieved. TaqMan is widely used for product detection. TaqMan uses a third oligonucleotide primer in the reaction mix which corresponds to sequence located between the two amplifying primers. This oligonucleotide carries a fluorescent tag at one end and a quenching tag at the other. During each PCR cycle the TaqMan probe anneals with the target sequence and is degraded as the DNA polymerase passes the annealed probe, thereby releasing the fluorescent tag, which can be monitored. As well as offering considerably improved quantitation, automation of the detection process and the use of 96 well plate technology allow TaqMan real time PCR to be adapted to high throughput analysis, which makes it especially useful in a clinical environment.

### Fishing with chips: the rise of microarrays

Increased use of robotics in molecular biology has led to ways of planting far more genetic information, in the form of cDNA or oligonucleotide sequences, onto solid matrices than was achievable before. Indeed, a complete cDNA library can now be gridded onto a single filter (called a gene array), rather than the dozen or so filters we might have expected to use five years ago. The move to miniaturisation is exemplified by the development of DNA microarrays, where DNAs or oligonucleotides are printed onto glass slides, allowing the simultaneous screening of many thousands of sequences in a single sample.27 This is an exciting development: in the last year a working draft of the human genome has been published which indicates that there are no more than 40 000 human genes, making this a manageable figure.
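The quantitative step described above, reading each reaction off in its exponential phase, is usually summarised by a threshold cycle: the interpolated cycle at which fluorescence first crosses a fixed level. A minimal sketch; the threshold value and the idealised curves below are illustrative, not from the text.

```python
def threshold_cycle(fluorescence, threshold):
    """Return the interpolated cycle at which the amplification curve
    first crosses `threshold`; None if it never does.

    `fluorescence` is a list of per-cycle readings (cycle 0, 1, 2, ...).
    """
    for cycle in range(1, len(fluorescence)):
        lo, hi = fluorescence[cycle - 1], fluorescence[cycle]
        if lo < threshold <= hi:
            # linear interpolation between the two bracketing cycles
            return cycle - 1 + (threshold - lo) / (hi - lo)
    return None

# Two samples: the one with more starting template crosses earlier.
abundant = [2 ** c for c in range(20)]       # idealised doubling curve
scarce = [2 ** (c - 4) for c in range(20)]   # 16-fold less template
ct_a = threshold_cycle(abundant, 1000)
ct_s = threshold_cycle(scarce, 1000)
assert ct_a < ct_s   # more template => threshold crossed sooner
```

With ideal doubling the two threshold cycles differ by exactly log2(16) = 4 cycles, which is how relative abundance is read off an amplification curve.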
Microarray technology, variously referred to as microarrays, cDNA arrays, gene expression arrays, and gene chips, can usually detect changes in gene expression of twofold and above, and can highlight single or global changes in gene expression arising from a pathological event or a developmental change. For example, Aitman and colleagues used microarrays as part of a strategy to identify genes implicated in human insulin resistance syndromes such as type 2 diabetes, combined hyperlipidaemia, and essential hypertension. Using a rat model for these diseases, the insulin resistant spontaneously hypertensive rat (SHR), a gene on rat chromosome 4 was identified encoding a fatty acid translocase which appears to underlie defects in fatty acid metabolism and hypertriglyceridaemia.28

DNA arrays work by the hybridisation of labelled DNA or RNA in solution to unique and specific DNA molecules gridded in an ordered pattern on a solid matrix such as a glass slide (microarray) or nylon membrane. When fluorescently labelled RNAs are used, the output can be read as a false colour trace using computer imaging software. With several different fluorescent groups available, it is possible to label RNA pools representing different developmental points, or compare a pathological state with the normal one, and then hybridise these to a common array. The outputs can then be digitally superimposed and changes in the expression of individual genes recorded (fig 5). The gridded sequences are usually derived from the 3′ non-coding regions of genes, cDNAs, and ESTs. There are now a number of commercially available “affinity matrix” platforms, including high density oligonucleotide arrays (GeneChips) representing up to 12 000 characterised genes, using glass as the matrix and fluorescence for detection.
In this system, in order to improve the signal-to-noise ratio and thereby improve RNA quantification, each gene is represented by 20 oligonucleotides (25–75 nt) of different sequence designed to hybridise to multiple regions of the same RNA molecule. Glass slide arrays are also available with PCR-amplified cDNAs (300–1000 bp) attached. Nylon membranes, which use autoradiography for detection, are also used for spotting cDNAs, but usually at a much lower density than that permissible on glass slides.

Figure 5 Schematic illustrating the stages involved in using DNA microarrays, highlighting the differences between oligonucleotide and cDNA arrays. In both procedures, total RNA is extracted from control and test samples (for example, normal and pathological) and converted to cDNA. For oligonucleotide microarrays the cDNA is then used as a template to generate fluorescently labelled complementary RNA (cRNA) by in vitro transcription. Control and experimental cRNA pools, labelled with the same dye, are hybridised to separate microarrays. After washing, the arrays are scanned in order to create a quantitative fluorescence image where the signal intensity at each “gene probe” is proportional to the number of molecules attached (and hence the abundance of that sequence in the original sample). Data from control and experimental samples are then compared, resulting in a cluster diagram where each column represents a single experiment and each row a single gene. Ratios of gene expression are shown relative to the control sample (illustrated): the scale ranges from red (maximal expression) to black (minimal expression). With cDNA arrays, the synthesised cDNA is first amplified and labelled through multiple rounds of cDNA synthesis and in vitro transcription, the two pools being labelled with different dyes—for example, Cy3 (green) and Cy5 (red).
Equal quantities of each sample are mixed and hybridised to the same cDNA microarray, which is then scanned at two different wavelengths giving the relative abundance of Cy3 and Cy5 transcripts in each sample. The images are then overlaid (illustrated) and spots visualised as ratios of red and green (in this illustration a red spot indicates a gene more highly expressed in normal than pathological tissue), with yellow indicating equal expression.

For all DNA array analyses high quality RNA is essential, although the quantities required can range from 50 μg down to 50 ng depending on the array used. Plentiful quantities of RNA can be easily extracted from in vitro cell culture systems or from large tissue samples. However, where cell numbers or tissue yield is limited—for example, if using clinical biopsies or cells obtained by laser capture microdissection—mRNA amplification is required.

A major application of arrays is the monitoring of gene expression (mRNA abundance). The collection of genes transcribed within a given cell, sometimes referred to as the expression profile, determines its cellular phenotype and function. Differences in gene expression profiles are responsible for both morphological and phenotypic differences and can be indicative of cellular responses to environmental stimuli. Knowing when, where, and to what extent a gene is expressed is pivotal to understanding the biological role of its encoded protein. A fundamental advantage of using arrays containing probes for thousands of different genes is that they provide a less biased view of cellular responses than a hypothesis based on the role of an isolated gene. DNA arrays should not be viewed as replacements for established techniques of gene expression quantification such as northern blots or RT-PCR. Indeed, it is essential that individual results of array experiments are verified, and apparent differences more accurately quantified, using, for example, real time RT-PCR.
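The red/green/yellow readout described above is, per spot, just an intensity ratio between the two channels. A minimal sketch of reducing a two-channel scan to a call, using the twofold change threshold quoted earlier in the text; the intensities are invented for illustration.

```python
import math

def classify_spot(cy5, cy3, fold=2.0):
    """Classify a two-colour spot by its Cy5/Cy3 intensity ratio.

    Following the colour convention in the figure legend: red = higher
    in the Cy5-labelled sample, green = higher in Cy3, yellow = roughly
    equal. `fold` is the change treated as significant (twofold here).
    """
    log_ratio = math.log2(cy5 / cy3)
    if log_ratio >= math.log2(fold):
        return "red"
    if log_ratio <= -math.log2(fold):
        return "green"
    return "yellow"

# Invented example intensities for three spots
assert classify_spot(4000, 1000) == "red"     # 4-fold up in the Cy5 sample
assert classify_spot(900, 1000) == "yellow"   # no real change
assert classify_spot(500, 2000) == "green"    # 4-fold down
```

Working in log ratios keeps up- and down-regulation symmetric, which is why cluster diagrams are usually drawn on a log2 scale.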
Although constantly improving, there are still several limitations to the widespread use of arrays that need to be overcome, such as the cost of microarrays (most are single use) and of data handling software, the number of genes identified and gridded, and the limited number of species for which arrays are available. Many of the filter based arrays and microarrays available “off-the-shelf” contain a plethora of well characterised genes from a variety of cell types. For investigators interested only in a particular tissue or cell type, such arrays are of limited use. For this reason companies are now offering custom made arrays containing specific sets of genes. Needless to say, these are not cheap. The more experienced laboratories are now producing their own custom made cDNA arrays. For example, in order to obtain a global picture of gene expression in the failing heart, Barrans and colleagues amplified by PCR 10 368 redundant expressed sequences from a number of human heart and artery cDNA libraries, representing 4777 sequence verified transcripts from known genes, human EST database entries, and novel cardiac ESTs.29

The importance of monitoring a large number of genes has been illustrated by the molecular classification of cancer. Through the analysis of samples obtained from individuals with and without acute leukaemia or diffuse large B cell lymphoma, it was apparent that reliable tumour classification predictions could not be made on the basis of any single gene, but that predictions based on the expression levels of 50 genes (selected from the more than 6000 monitored on the arrays) were highly accurate.30

## SUMMARY

Understanding the molecular mechanisms underlying how genes in the heart are expressed, and how their expression may be modulated in response to external stimuli, is now recognised as an essential part of understanding how the heart develops, functions, and responds to pathological events.
Such knowledge will lead to improved or new strategies for treating heart disease. Whereas the last decade has seen considerable advances in examining individual gene expression patterns, the new technologies now offer the ability to analyse global patterns of expression of many hundreds of genes. This will serve both as a methodology for identifying new genes involved in disease processes and in categorising global shifts in expression, and hence general regulatory networks.

## GLOSSARY

Chromatin: DNA wrapped around a histone protein core at periodic intervals; the basic structure of the chromosome.

Clone: (noun) descriptive term in recombinant DNA terminology for any individual DNA sequence obtained through the process of cloning. (verb) to isolate a particular gene sequence of interest (genomic DNA or cDNA copy of mRNA) and to insert it, for example, into an artificial plasmid vector which can be introduced into bacterial cells, where it will replicate, allowing unlimited amounts of the isolated sequence to be propagated from cells harbouring the plasmid. Other types of vector allow propagation and expression in yeast or mammalian cells.

Codon: Group of three nucleotides whose sequence specifies a particular amino acid. The order of codons in a continuous mRNA sequence reproduced from the gene thereby dictates the correct order of amino acids in the protein.

Complementary DNA (cDNA): A DNA copy of an RNA sequence, produced by the action of the enzyme reverse transcriptase.

Cloning: various meanings, including (1) inserting a DNA fragment of interest into a vector molecule, thereby generating a recombinant DNA molecule which allows one to propagate that DNA fragment in bacteria; (2) generating a new organism from a single somatic cell (for example, “Dolly” the cloned sheep).
Enhancer: DNA sequence capable of conferring a significant increase in gene transcription through its ability to bind cognate transcription factors. Enhancers are cis-acting but can function at a distance from the gene promoter and are position and orientation independent.

Exons: Regions of a gene that are represented in the final mRNA molecule. Most, but not all, exons of a gene code for the protein.

GATA: Family of transcription factors utilising a zinc chelating protein “finger” structure to recognise and bind with high affinity to specific DNA sequences.

Gene expression: The process by which a gene is transcribed in the nucleus to give mRNA (transcription), from which protein is later produced in the cytoplasm by translation.

Homeobox: DNA binding motif composed of three α helices, two of which are separated by a short, mobile peptide linker. Found in all homeodomain proteins, many of which are implicated in activating patterns of gene expression associated with patterning and other early development events.

Homology: Similarity resulting from being derived from a common ancestral gene. Often used (incorrectly) to refer simply to sequence similarity.

Hybridise: To use a (usually) radioactively labelled DNA or RNA single stranded molecule specific for one strand of a particular gene as a probe to find and bind to its immobilised, single stranded complement strand (usually presented within a pool of many unrelated sequences) in a liquid environment in which salt molarity and temperature direct the efficiency of the hybridisation process.

Introns: Regions of a gene that lie between exons and, unlike exons, are excluded from the final mRNA by the process of mRNA splicing.

MAP kinase: Mitogen activated protein kinase. Enzymes which modify proteins, especially transcription factors, by adding a phosphate group, resulting in altered activity.
Three major MAPK subgroups have been identified: the extracellular signal regulated kinases (ERKs), the c-Jun N-terminal kinases/stress-activated protein kinases (JNK/SAPKs), and the p38 MAP kinases. Together with their upstream regulators (fig 2), MAP kinases represent one of the major signal systems used by eukaryotic cells to transduce extracellular signals into cellular responses.

Messenger RNA (mRNA): Produced by the process of transcription, mRNA is exported to the cytoplasm where protein is produced from it at the ribosome (translation).

Microarray: A solid matrix on which minute amounts of DNA sequences are imprinted robotically for analysis of differential gene expression using fluorescently labelled RNA probes.

Northern blot: Method of analysing mRNA expression using a nylon or nitrocellulose blot on which are immobilised electrophoretically separated RNA mixtures.

Nucleotide: Basic unit of nucleic acid consisting of one of four bases (adenine (A), guanine (G), cytosine (C) and thymine (T)—the four letter alphabet of DNA) coupled to sugar and phosphate groups.

Oligonucleotide: Short (for example 18–24 nucleotides) chemically synthesised single stranded DNA molecules. Used in PCR reactions and, when radiolabelled, as probes for Southern blots.

PCR: Polymerase chain reaction. Enzymatic method of synthesising multiple copies of a particular gene sequence by repeated cycles of thermal DNA denaturation and extension of gene specific oligonucleotide primers.

PolyA tail: A tract of adenylic acid residues attached post-splicing to the 3′ end of all messenger RNAs. Useful for purifying mature mRNA by hybridising to solid matrices bearing poly-T tracts, or for serving as template for annealing of poly-T primers before cDNA synthesis by reverse transcriptase.

Primer: Short single stranded DNA molecules, used to prime DNA synthesis from RNA (by reverse transcription) or DNA templates.
Probe: Generic term for a DNA or RNA molecule used to detect the presence of specific genes or their RNA products, whether by northern blot or hybridisation in situ.

Promoter: Region of DNA just upstream of the gene which is responsible for binding transcription factors and RNA polymerase in order to initiate gene transcription.

Reporter gene: Generic term for a vector containing a biologically measurable marker (for example, the Escherichia coli β-galactosidase gene LacZ or the firefly luciferase gene). By inserting an isolated promoter sequence in front of the reporter gene, the activity of that promoter can be followed once the vector has been introduced into cultured cells or a transgenic animal.

Reverse transcription: Process of making a cDNA copy of an RNA molecule; carried out by the retroviral enzyme reverse transcriptase.

RNA splicing: The process of removing intron-derived RNA sequences from the primary transcript to give mRNA.

Southern blot: Named after its inventor, a technique for transferring DNA to a solid membrane which can then be analysed by hybridisation with labelled probes specific to particular genes.

TAD: Transcription activation domain. A region of a transcription factor that has been experimentally determined to be essential for contributing to high levels of gene transcription once brought in close proximity to RNA polymerase by virtue of the factor binding to DNA.

Transgenic: Refers to the transfer of foreign DNA into the germline of a host (transgenic) animal.

Transcription: The production of a primary RNA transcript by RNA polymerase from a gene.

Transcription factors: DNA-binding proteins which bind to gene promoters or enhancers and interact with RNA polymerase to alter the overall rate of gene transcription.

Translation: The production of protein from a mRNA template at the ribosome.

Western blot: Electrophoretic method of transferring proteins, separated by molecular weight on a polyacrylamide gel matrix, to a solid filter.
Filters are then probed with a specific antibody to detect the proteins under investigation.

### Myocardial molecular biology: key points

• Transcription (RNA synthesis) plays a significant role in regulating gene expression

• Interactions between transcription factors bound to DNA direct cardiac specific gene expression

• RNA splicing increases the diversity of proteins that can be produced from a single gene

• A variety of techniques are available for monitoring RNA and protein expression

• Some techniques allow quantification—for example, real time polymerase chain reaction

• Modern techniques are aimed at identifying global patterns of expression involved in disease (for example, gene chip analysis) rather than studying individual genes

## Acknowledgments

PJRB is a Senior Research Fellow of the British Heart Foundation. We are grateful to our colleagues Dr Kim Dellow and Dr Pank Bhavsar for their support and comments during the preparation of this manuscript.
Much to my confusion, the openSUSE source repository containing debug symbols is not enabled by default, though the repo metadata is present. This means that, by default, debug packages will not be returned in your zypper search results. You can enable the repository so that debug packages show up in zypper searches.
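On a typical openSUSE install this is a one-liner with `zypper modifyrepo`. The alias `repo-debug` below is the usual default but may differ on your system, so list your repositories first to confirm:

```shell
# List configured repositories and find the debug one's alias or number
zypper repos | grep -i debug

# Enable it (the alias "repo-debug" is an assumption; substitute yours),
# then refresh metadata so its packages enter the search index
sudo zypper modifyrepo --enable repo-debug
sudo zypper refresh

# Debug packages such as *-debuginfo now show up in searches
zypper search debuginfo
```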
# What is Charles' law formula?

Jun 27, 2014

Charles' law is the physical law that the volume of a fixed mass of gas held at constant pressure varies directly with its absolute temperature. At a higher temperature a gas takes up more volume (it expands); at a lower temperature it takes up less volume (it contracts).

Suppose we have a given amount of gas enclosed in a balloon; the temperature of the gas is ${T}_{1}$ (Kelvin), and it takes up the volume ${V}_{1}$ (litres). If the temperature is changed to a new value ${T}_{2}$, then the volume changes to ${V}_{2}$. According to the law, the ratio of volume to temperature stays the same: if one increases, so does the other, and the ratio does not change.

${V}_{1}$ / ${T}_{1}$ = k ......(a)

${V}_{2}$ / ${T}_{2}$ = k ......(b)

Equating the two equations (a) and (b):

${V}_{1}$ / ${T}_{1}$ = ${V}_{2}$ / ${T}_{2}$

or

${V}_{1}$ . ${T}_{2}$ = ${V}_{2}$ . ${T}_{1}$
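A quick numerical check of the rearranged formula; the example volume and temperatures are made up for illustration:

```python
def charles_v2(v1, t1, t2):
    """Volume after an isobaric temperature change, by Charles' law.

    v1: initial volume (any unit); t1, t2: absolute temperatures in kelvin.
    Rearranged from V1/T1 = V2/T2.
    """
    if t1 <= 0 or t2 <= 0:
        raise ValueError("temperatures must be absolute (kelvin) and positive")
    return v1 * t2 / t1

# A 2.0 L balloon warmed from 300 K to 450 K at constant pressure:
print(charles_v2(2.0, 300, 450))   # → 3.0 (litres): the hotter gas expands
```

Note the guard against non-positive temperatures: the law only holds on an absolute scale, so Celsius values must be converted to kelvin first.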
# 9.8: Molecular Orbital Theory

##### Learning Objectives

Make sure you thoroughly understand the following essential ideas:

• In what fundamental way does the molecular orbital model differ from the other models of chemical bonding that have been described in these lessons?

• Explain how bonding and antibonding orbitals arise from atomic orbitals, and how they differ physically.

• Describe the essential difference between a sigma and a pi molecular orbital.

• Define bond order, and state its significance.

• Construct a "molecular orbital diagram" of the kind shown in this lesson for a simple diatomic molecule, and indicate whether the molecule or its positive and negative ions should be stable.

The molecular orbital model is by far the most productive of the various models of chemical bonding, and serves as the basis for most quantitative calculations, including those that lead to many of the computer-generated images that you have seen elsewhere in these units. In its full development, molecular orbital theory involves a lot of complicated mathematics, but the fundamental ideas behind it are quite easily understood, and this is all we will try to accomplish in this lesson.

This is a big departure from the simple Lewis and VSEPR models that were based on the one-center orbitals of individual atoms. The more sophisticated hybridization model recognized that these orbitals will be modified by their interaction with other atoms. But all of these valence-bond models, as they are generally called, are very limited in their applicability and predictive power, because they fail to recognize that the distribution of the pooled valence electrons is governed by the totality of positive centers.

## Molecular Orbitals

Chemical bonding occurs when the net attractive force between an electron and two nuclei exceeds the electrostatic repulsion between the two nuclei. For this to happen, the electron must be in a region of space which we call the binding region.
Conversely, if the electron is off to one side, in an anti-binding region, it actually adds to the repulsion between the two nuclei and helps push them away. The easiest way of visualizing a molecular orbital is to start by picturing two isolated atoms and the electron orbitals that each would have separately. These are just the orbitals of the separate atoms, by themselves, which we already understand. We will then try to predict the manner in which these atomic orbitals interact as we gradually move the two atoms closer together. Finally, we will reach some point where the internuclear distance corresponds to that of the molecule we are studying. The corresponding orbitals will then be the molecular orbitals of our new molecule.

##### The hydrogen molecule ion: the simplest molecule

To see how this works, we will consider the simplest possible molecule, $$\ce{H2^{+}}$$. This is the hydrogen molecule ion, which consists of two nuclei of charge +1, and a single electron shared between them. As two H nuclei move toward each other, the 1s atomic orbitals of the isolated atoms gradually merge into a new molecular orbital in which the greatest electron density falls between the two nuclei. Since this is just the location in which electrons can exert the most attractive force on the two nuclei simultaneously, this arrangement constitutes a bonding molecular orbital. Regarding it as a three-dimensional region of space, we see that it is symmetrical about the line of centers between the nuclei; in accord with our usual nomenclature, we refer to this as a σ (sigma) orbital.

## Bonding and Antibonding Molecular Orbitals

There is one minor difficulty: we started with two orbitals (the 1s atomic orbitals), and ended up with only one orbital. Now according to the rules of quantum mechanics, orbitals cannot simply appear and disappear at our convenience.
For one thing, this would raise the question of at just what internuclear distance do we suddenly change from having two orbitals, to having only one? It turns out that when orbitals interact, they are free to change their forms, but there must always be the same number. This is just another way of saying that there must always be the same number of possible allowed sets of electron quantum numbers. How can we find the missing orbital? To answer this question, we must go back to the wave-like character of orbitals that we developed in our earlier treatment of the hydrogen atom. You are probably aware that wave phenomena such as sound waves, light waves, or even ocean waves can combine or interact with one another in two ways: they can either reinforce each other, resulting in a stronger wave, or they can interfere with and partially destroy each other. A roughly similar thing occurs when the “matter waves” corresponding to the two separate hydrogen 1s orbitals interact; both in-phase and out-of-phase combinations are possible, and both occur. The in-phase, reinforcing interaction yields the bonding orbital that we just considered. The other, corresponding to out-of-phase combination of the two orbitals, gives rise to a molecular orbital that has its greatest electron probability in what is clearly the antibonding region of space. This second orbital is therefore called an antibonding orbital. When the two 1s wave functions combine out-of-phase, the regions of high electron probability do not merge. In fact, the orbitals act as if they actually repel each other. Notice particularly that there is a region of space exactly equidistant between the nuclei at which the probability of finding the electron is zero. This region is called a nodal surface, and is characteristic of antibonding orbitals. It should be clear that any electrons that find themselves in an antibonding orbital cannot possibly contribute to bond formation; in fact, they will actively oppose it. 
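The nodal surface can be seen numerically: combining two 1s-like functions in and out of phase along the internuclear axis, the out-of-phase combination vanishes exactly at the midpoint. The exponential form and the internuclear distance below are illustrative toys, not a quantitative H2+ calculation.

```python
import math

def s1(x, center):
    """Unnormalised 1s-like amplitude along the internuclear axis (toy model)."""
    return math.exp(-abs(x - center))

R = 2.0  # illustrative internuclear distance; nuclei at x = 0 and x = R

def bonding(x):
    return s1(x, 0.0) + s1(x, R)      # in-phase (sigma) combination

def antibonding(x):
    return s1(x, 0.0) - s1(x, R)      # out-of-phase (sigma*) combination

mid = R / 2
# In-phase: the amplitudes reinforce between the nuclei.
assert bonding(mid) > s1(mid, 0.0)
# Out-of-phase: exact cancellation at the midpoint => nodal surface there.
assert abs(antibonding(mid)) < 1e-12
```

By symmetry the cancellation at the midpoint holds for any shared radial form, which is why the node is a general feature of the antibonding combination and not an artefact of this toy function.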
We see, then, that whenever two orbitals, originally on separate atoms, begin to interact as we push the two nuclei toward each other, these two atomic orbitals will gradually merge into a pair of molecular orbitals, one of which will have bonding character, while the other will be antibonding. In a more advanced treatment, it would be fairly easy to show that this result follows quite naturally from the wave-like nature of the combining orbitals.

What is the difference between these two kinds of orbitals, as far as their potential energies are concerned? More precisely, which kind of orbital would enable an electron to be at a lower potential energy? Clearly, the potential energy decreases as the electron moves into a region that enables it to “see” the maximum amount of positive charge. In a simple diatomic molecule, this will be in the internuclear region, where the electron can be simultaneously close to two nuclei. The bonding orbital will therefore have the lower potential energy.

## Molecular Orbital Diagrams

This scheme of bonding and antibonding orbitals is usually depicted by a molecular orbital diagram such as the one shown here for the dihydrogen ion H2+. Atomic valence electrons (shown in boxes on the left and right) fill the lower-energy molecular orbitals before the higher ones, just as is the case for atomic orbitals. Thus, the single electron in this simplest of all molecules goes into the bonding orbital, leaving the antibonding orbital empty. Since any orbital can hold a maximum of two electrons, the bonding orbital in H2+ is only half-full. This single electron is nevertheless enough to lower the potential energy of one mole of hydrogen nuclei pairs by 270 kJ—quite enough to make them stick together and behave like a distinct molecular species.
Although H2+ is stable in this energetic sense, it happens to be an extremely reactive molecule—so much so that it even reacts with itself, so these ions are not commonly encountered in everyday chemistry.

### Dihydrogen

If one electron in the bonding orbital is conducive to bond formation, might two electrons be even better? We can arrange this by combining two hydrogen atoms: two nuclei, and two electrons. Both electrons will enter the bonding orbital, as depicted in the figure. We recall that one electron lowered the potential energy of the two nuclei by 270 kJ/mole, so we might expect two electrons to produce twice this much stabilization, or 540 kJ/mole.

Bond order is defined as the difference between the number of electron pairs occupying bonding and antibonding orbitals in the molecule. A bond order of unity corresponds to a conventional "single bond".

Experimentally, one finds that it takes only 452 kJ to break apart a mole of hydrogen molecules. The reason the potential energy was not lowered by the full amount is that the presence of two electrons in the same orbital gives rise to a repulsion that acts against the stabilization. This is exactly the same effect we saw in comparing the ionization energies of the hydrogen and helium atoms.

### Dihelium

With two electrons we are still ahead, so let’s try for three. The dihelium positive ion is a three-electron molecule. We can think of it as containing two helium nuclei and three electrons. This molecule is stable, but not as stable as dihydrogen; the energy required to break He2+ apart is 301 kJ/mole. The reason for this should be obvious: two electrons are accommodated in the bonding orbital, but the third electron must go into the next higher slot, which turns out to be the sigma antibonding orbital. The presence of an electron in this orbital, as we have seen, gives rise to a repulsive component which acts against, and partially cancels out, the attractive effect of the filled bonding orbital.
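The bond orders of these small species follow directly from the definition above; counting electrons rather than pairs, it is half the difference between the numbers of bonding and antibonding electrons. A quick sketch:

```python
def bond_order(bonding_electrons, antibonding_electrons):
    """Bond order = (bonding - antibonding electrons) / 2,
    i.e. the difference in the number of electron *pairs*."""
    return (bonding_electrons - antibonding_electrons) / 2

# Electron configurations of the species discussed in this lesson:
species = {
    "H2+":  (1, 0),   # one bonding electron
    "H2":   (2, 0),   # filled bonding orbital
    "He2+": (2, 1),   # third electron forced into sigma*
    "He2":  (2, 2),   # bonding and antibonding cancel: no net bond
}
for name, (nb, na) in species.items():
    print(name, bond_order(nb, na))
# → H2+ 0.5, H2 1.0, He2+ 0.5, He2 0.0
```

The half-integer values for H2+ and He2+ reflect a single unpaired bonding (or uncancelled) electron, consistent with both ions being bound but weaker than H2.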
Taking our building-up process one step further, we can look at the possibilities of combining two helium atoms to form dihelium. You should now be able to predict that He2 cannot be a stable molecule; the reason, of course, is that we now have four electrons— two in the bonding orbital, and two in the antibonding orbital. The one orbital almost exactly cancels out the effect of the other. Experimentally, the bond energy of dihelium is only 0.084 kJ/mol; this is not enough to hold the two atoms together in the presence of random thermal motion at ordinary temperatures, so dihelium dissociates as quickly as it is formed, and is therefore not a distinct chemical species.

### Diatomic molecules containing second-row atoms

The four simplest molecules we have examined so far involve molecular orbitals that derive from two 1s atomic orbitals. If we wish to extend our model to larger atoms, we will have to contend with higher atomic orbitals as well. One greatly simplifying principle here is that only the valence-shell orbitals need to be considered. Inner atomic orbitals such as 1s are deep within the atom and well-shielded from the electric field of a neighboring nucleus, so these orbitals largely retain their atomic character when bonds are formed.

### Dilithium

For example, when lithium, whose configuration is 1s²2s¹, bonds with itself to form Li2, we can forget about the 1s atomic orbitals and consider only the σ bonding and antibonding orbitals. Since there are not enough electrons to populate the antibonding orbital, the attractive forces win out and we have a stable molecule. The bond energy of dilithium is 110 kJ/mole; notice that this value is less than half of the 270 kJ bond energy in dihydrogen, which also has two electrons in a bonding orbital. The reason, of course, is that the 2s orbital of Li is much farther from its nucleus than is the 1s orbital of H, and this is equally true for the corresponding molecular orbitals.
It is a general rule, then, that the larger the parent atom, the less stable will be the corresponding diatomic molecule.

### Lithium hydride

All the molecules we have considered thus far are homonuclear; they are made up of one kind of atom. As an example of a heteronuclear molecule, let’s take a look at a very simple example— lithium hydride. Lithium hydride is a stable, though highly reactive, molecule. The diagram shows how the molecular orbitals in lithium hydride can be related to the atomic orbitals of the parent atoms. One thing that makes this diagram look different from the ones we have seen previously is that the parent atomic orbitals have widely differing energies; the greater nuclear charge of lithium reduces the energy of its 1s orbital to a value well below that of the 1s hydrogen orbital.

There are two occupied atomic orbitals on the lithium atom, and only one on the hydrogen. With which of the lithium orbitals does the hydrogen 1s orbital interact? The lithium 1s orbital is the lowest-energy orbital on the diagram. Because this orbital is so small and retains its electrons so tightly, it does not contribute to bonding; we need consider only the 2s orbital of lithium, which combines with the 1s orbital of hydrogen to form the usual pair of sigma bonding and antibonding orbitals. Of the four electrons in lithium and hydrogen, two are retained in the lithium 1s orbital, and the two remaining ones reside in the σ orbital that constitutes the Li–H covalent bond. The resulting molecule is 243 kJ/mole more stable than the parent atoms.

As we might expect, the bond energy of the heteronuclear molecule is very close to the average of the energies of the corresponding homonuclear molecules. Actually, it turns out that the correct way to make this comparison is to take the geometric mean, rather than the arithmetic mean, of the two bond energies. The geometric mean is simply the square root of the product of the two energies.
The geometric mean of the H2 and Li2 bond energies is 213 kJ/mole, so it appears that the lithium hydride molecule is 30 kJ/mole more stable than it “is supposed” to be. This is attributed to the fact that the electrons in the 2σ bonding orbital are not equally shared between the two nuclei; the orbital is skewed slightly so that the electrons are attracted somewhat more to the hydrogen atom. This bond polarity, which we considered in some detail near the beginning of our study of covalent bonding, arises from the greater electron-attracting power of hydrogen— a consequence of the very small size of this atom. The electrons can be at a lower potential energy if they are slightly closer to the hydrogen end of the lithium hydride molecule. It is worth pointing out, however, that the electrons are, on the average, also closer to the lithium nucleus, compared to where they would be in the 2s orbital of the isolated lithium atom. So it appears that everyone gains and no one loses here!

## $$\sigma$$ and $$\pi$$ orbitals

The molecules we have considered thus far are composed of atoms that have no more than four electrons each; our molecular orbitals have therefore been derived from s-type atomic orbitals only. If we wish to apply our model to molecules involving larger atoms, we must take a close look at the way in which p-type orbitals interact as well. Although two atomic p orbitals will be expected to split into bonding and antibonding orbitals just as before, it turns out that the extent of this splitting, and thus the relative energies of the resulting molecular orbitals, depend very much on the nature of the particular p orbital that is involved. You will recall that there are three possible p orbitals for any value of the principal quantum number. You should also recall that p orbitals are not spherical like s orbitals, but are elongated, and thus possess definite directional properties.
The three p orbitals correspond to the three directions of Cartesian space, and are frequently designated px, py, and pz, to indicate the axis along which the orbital is aligned. Of course, in the free atom, where no coordinate system is defined, all directions are equivalent, and so are the p orbitals. But when the atom is near another atom, the electric field due to that other atom acts as a point of reference that defines a set of directions. The line of centers between the two nuclei is conventionally taken as the x axis. If this direction is represented horizontally on a sheet of paper, then the y axis is in the vertical direction and the z axis would be normal to the page. These directional differences lead to the formation of two different classes of molecular orbitals. The above figure shows how two px atomic orbitals interact. In many ways the resulting molecular orbitals are similar to what we got when s atomic orbitals combined; the bonding orbital has a large electron density in the region between the two nuclei, and thus corresponds to the lower potential energy. In the out-of-phase combination, most of the electron density is away from the internuclear region, and as before, there is a surface exactly halfway between the nuclei that corresponds to zero electron density. This is clearly an antibonding orbital— again, in general shape, very much like the kind we saw in hydrogen and similar molecules. Like the ones derived from s-atomic orbitals, these molecular orbitals are σ (sigma) orbitals. Sigma orbitals are cylindrically symmetric with respect to the line of centers of the nuclei; this means that if you could look down this line of centers, the electron density would be the same in all directions. 
When two p orbitals oriented perpendicular to the internuclear axis (py or pz) combine, we get the bonding and antibonding pairs that we would expect, but the resulting molecular orbitals have a different symmetry: rather than being rotationally symmetric about the line of centers, these orbitals extend in both perpendicular directions from this line of centers. Orbitals having this more complicated symmetry are called π (pi) orbitals. There are two of them, πy and πz, differing only in orientation, but otherwise completely equivalent. The different geometric properties of the π and σ orbitals cause the σ orbitals to split more than the π orbitals, so that the σ* antibonding orbital always has the highest energy. The σ bonding orbital can be either higher or lower than the π bonding orbitals, depending on the particular atom.

## Second-Row Diatomics

If we combine the splitting schemes for the 2s and 2p orbitals, we can predict bond order in all of the diatomic molecules and ions composed of elements in the first complete row of the periodic table. Remember that only the valence orbitals of the atoms need be considered; as we saw in the cases of lithium hydride and dilithium, the inner orbitals remain tightly bound and retain their localized atomic character.

### Dicarbon

Carbon has four outer-shell electrons, two 2s and two 2p. For two carbon atoms, we therefore have a total of eight electrons, which can be accommodated in the first four molecular orbitals. The lowest two are the 2s-derived bonding and antibonding pair, so the “first” four electrons make no net contribution to bonding. The other four electrons go into the pair of π bonding orbitals, and there are no more electrons for the antibonding orbitals— so we would expect the dicarbon molecule to be stable, and it is. (But being extremely reactive, it is known only in the gas phase.) You will recall that one pair of electrons shared between two atoms constitutes a “single” chemical bond; this is Lewis’ original definition of the covalent bond.
In C2 there are two pairs of electrons in the π bonding orbitals, so we have what amounts to a double bond here; in other words, the bond order in dicarbon is two.

### Dioxygen

The electron configuration of oxygen is 1s²2s²2p⁴. In O2, therefore, we need to accommodate twelve valence electrons (six from each oxygen atom) in molecular orbitals. As you can see from the diagram, this places two electrons in antibonding orbitals. Each of these electrons occupies a separate π* orbital because this leads to less electron-electron repulsion (Hund's Rule). The bond energy of molecular oxygen is 498 kJ/mole. This is smaller than the 945 kJ bond energy of N2— not surprising, considering that oxygen has two electrons in an antibonding orbital, compared to nitrogen’s one.

The two unpaired electrons of the dioxygen molecule give this substance an unusual and distinctive property: O2 is paramagnetic. The paramagnetism of oxygen can readily be demonstrated by pouring liquid O2 between the poles of a strong permanent magnet; the liquid stream is trapped by the field and fills up the space between the poles.

Since molecular oxygen contains two electrons in an antibonding orbital, it might be possible to make the molecule more stable by removing one of these electrons, thus increasing the ratio of bonding to antibonding electrons in the molecule. Just as we would expect, and in accord with our model, O2+ has a bond energy higher than that of neutral dioxygen; removing the one electron actually gives us a more stable molecule. This constitutes a very good test of our model of bonding and antibonding orbitals. In the same way, adding an electron to O2 results in a weakening of the bond, as evidenced by the lower bond energy of O2−. The bond energy in this ion is not known, but the length of the bond is greater, and this is indicative of a lower bond energy. These two dioxygen ions, by the way, are highly reactive and can be observed only in the gas phase.
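Filling electrons into an assumed energy ordering of the valence molecular orbitals reproduces the trend just described for the dioxygen species. The ordering used below (σ2p below π2p) is the one commonly quoted for O2; as the text notes, the relative order of the σ and π bonding orbitals varies with the atom, so treat this as an illustrative sketch rather than a general rule:

```python
# (name, capacity, is_antibonding) in order of increasing energy,
# the ordering commonly quoted for O2 (an assumption, not from the text)
ORBITALS = [("sigma_2s", 2, False), ("sigma*_2s", 2, True),
            ("sigma_2p", 2, False), ("pi_2p", 4, False),
            ("pi*_2p", 4, True), ("sigma*_2p", 2, True)]

def bond_order(n_electrons):
    """Fill valence electrons bottom-up and return (Nb - Na) / 2."""
    bonding = antibonding = 0
    left = n_electrons
    for _, capacity, anti in ORBITALS:
        take = min(capacity, left)
        left -= take
        if anti:
            antibonding += take
        else:
            bonding += take
    return (bonding - antibonding) / 2

print(bond_order(12))  # O2  (12 valence electrons) -> 2.0
print(bond_order(11))  # O2+ (one fewer antibonding electron) -> 2.5
print(bond_order(13))  # O2- (one more antibonding electron)  -> 1.5
```

The rising bond order from O2− through O2 to O2+ matches the bond-strength ordering argued for above.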
This page titled 9.8: Molecular Orbital Theory is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Stephen Lower via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
Hyperbolic Secant Calculator

This tool evaluates the hyperbolic secant of a number, sech(x), for a given argument x.

Definitions

General

The hyperbolic secant function is defined as:

sech(x) = 1/cosh(x) = 2/(e^x + e^(-x))

The graph of the hyperbolic secant function is shown in the figure below.

Series

All hyperbolic functions can be defined in an infinite series form. The hyperbolic secant function can be written as:

sech(x) = Σ (E_n / n!) x^n,  n = 0, 1, 2, …

The above series converges for |x| < π/2. E_n denotes the n-th Euler number.

Properties

The derivative of the hyperbolic secant function is:

d/dx sech(x) = −sech(x) tanh(x)

The integral of the hyperbolic secant is given by:

∫ sech(x) dx = arctan(sinh(x)) + C
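A minimal Python equivalent of the calculator, evaluating the definition above directly:

```python
import math

def sech(x):
    """Hyperbolic secant: sech(x) = 1/cosh(x) = 2/(e^x + e^-x)."""
    return 1.0 / math.cosh(x)

print(sech(0.0))  # 1.0, the maximum of the function
# The two forms of the definition agree to machine precision:
print(abs(sech(1.0) - 2 / (math.e + 1 / math.e)) < 1e-12)  # True
# Numerical check of the derivative identity d/dx sech(x) = -sech(x)*tanh(x):
h = 1e-6
numeric = (sech(1 + h) - sech(1 - h)) / (2 * h)
print(abs(numeric + sech(1) * math.tanh(1)) < 1e-9)  # True
```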
Ambipolar diffusion length and photoconductivity measurements on “midgap” hydrogenated microcrystalline silicon

In: Journal of Applied Physics, 1996, vol. 80, no. 9, p. 5111-5115

Summary

Hydrogenated microcrystalline silicon (µc-Si:H) deposited by VHF plasma-enhanced chemical vapor deposition has recently been proven to be fully stable, with respect to light-induced degradation, when adequately used in p-i-n solar cells. Stable solar-cell efficiencies of 7.7% have been obtained with single-junction cells, using “midgap” microcrystalline i-layers having an optical gap of around 1 eV. In the present paper, the electronic transport properties of such microcrystalline layers are determined by the steady-state photocarrier grating method (SSPG) and steady-state photoconductivity measurements, in a coplanar configuration. The conditions for the validity of the procedure for determining the ambipolar diffusion length, Lamb, from SSPG measurements (as previously derived theoretically in the context of amorphous silicon) are carefully re-examined and found to hold in these µc-Si:H layers, provided certain additional precautions are taken. Otherwise, e.g., the prevalence of the “lifetime” regime (as opposed to the “relaxation time” regime) becomes questionable, in sharp contrast with the case of amorphous semiconductors, where this condition is almost never a problem. For the best layers measured so far, Lamb is about twice as high and the photoconductivity σphoto four times as high in µc-Si:H, when compared to device-quality a-Si:H.
Until now, the highest values of Lamb found by the authors for µc-Si:H layers are around 3×10⁻⁵ cm.
# The non-units in $\mathbb{R}[[x]]$ form a principal ideal.

I'm having a bit of confusion regarding the ideal in $\mathbb{R}[[x]]$ consisting of non-units, and I'm probably making some silly mistake somewhere. It's clear from order considerations that the units of this ring are the non-zero constants, and so my intuition has suggested that the ideal of non-units is principal and generated by $x$. But, in this case, every element of $(x)$ is divisible by $x$. However, $1+x\in \mathbb{R}[[x]]$ is not divisible by $x$ yet it is a non-unit. Can someone point out where my error is? Thank you.

- Are you sure the set of non-units is an ideal? $1+x$ and $1-x$ are non-units, but... – wj32 Nov 2 '12 at 6:01
- Whoa! Typographical error. I meant to write power series instead of polynomials. Correction soon to come. – Alexander Sibelius Nov 2 '12 at 6:07
- $1+x$ is a unit! – nik Nov 2 '12 at 6:12
- Alright, at this point it looks like a real power series is a unit if and only if it has a nonzero constant term. After I prove this, the fact that the non-units form a principal ideal will be trivial. Thanks again. – Alexander Sibelius Nov 2 '12 at 6:19

The units are not the non-zero constants. For example, $$(1-x)^{-1}=1+x+x^2+\cdots.$$ The ideal of non-units is indeed generated by $x$.

- Ah, I see what I've done now. Thanks. – Alexander Sibelius Nov 2 '12 at 6:13

In the polynomial ring $\mathbb{R}[x]$, by contrast, the units really are the non-zero constants; $1+x$ and $x$ are non-units, and $1$ is a unit. Since $(1+x)+(-x) \: = \: 1+0 \: = \: 1$, the set of non-units in $\mathbb{R}[x]$ is not closed under addition, and therefore the set of non-units in $\mathbb{R}[x]$ is not an ideal.

- $$(1+x)\sum_{n=0}^\infty (-1)^n x^n ?$$
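The criterion the asker arrives at (a power series is a unit iff its constant term is nonzero) can be checked numerically: when $a_0 \neq 0$, the inverse series is computed term by term from $b_0 = 1/a_0$ and $b_n = -\left(\sum_{k\ge 1} a_k b_{n-k}\right)/a_0$. A small illustrative sketch, truncating everything to finitely many coefficients:

```python
def series_inverse(a, nterms):
    """Coefficients of 1/A(x) mod x^nterms, for a power series A with a[0] != 0.

    Solves A(x)*B(x) = 1 coefficient by coefficient:
    b[0] = 1/a[0],  b[n] = -(sum_{k>=1} a[k]*b[n-k]) / a[0].
    """
    if a[0] == 0:
        raise ValueError("not a unit: constant term is zero")
    b = [1.0 / a[0]]
    for n in range(1, nterms):
        s = sum(a[k] * b[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# (1 + x)^{-1} = 1 - x + x^2 - x^3 + ...  (the series hinted at in the comments)
print(series_inverse([1, 1], 5))   # [1.0, -1.0, 1.0, -1.0, 1.0]
# (1 - x)^{-1} = 1 + x + x^2 + ...  (the example from the answer)
print(series_inverse([1, -1], 5))  # [1.0, 1.0, 1.0, 1.0, 1.0]
# x itself has constant term 0, so it is not a unit in R[[x]]
```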
# The gcrypto script

GC3Apps provides a driver script to execute multiple gnfs-cmd jobs, each of them with a different parameter set. Altogether they form a single crypto simulation of a large parameter space. It uses the generic gc3libs.cmdline.SessionBasedScript framework.

The purpose of gcrypto is to execute several concurrent runs of gnfs-cmd on a parameter set. These runs are performed in parallel using every available GC3Pie resource; you can of course control how many runs should be executed and select what output files you want from each one.

## Introduction

Like a for-loop, the gcrypto driver script takes as input three mandatory arguments:

1. RANGE_START: initial value of the range (e.g., 800000000)
2. RANGE_END: final value of the range (e.g., 1200000000)
3. SLICE: extent of the range that will be examined by a single job (e.g., 1000)

For example:

# gcrypto 800000000 1200000000 1000

will produce 400000 jobs; the first job will perform calculations on the range 800000000 to 800000000+1000, the 2nd one will do the range 800001000 to 800002000, and so on.

The location of the input-file archive (e.g. lfc://lfc.smscg.ch/crypto/lacal/input.tgz) can be specified with the ‘-i’ option. Otherwise a default filename ‘input.tgz’ will be searched for in the current directory.

Job progress is monitored and, when a job is done, output is retrieved back to the submitting host in folders named:

RANGE_START + (SLICE * ACTUAL_STEP)

where ACTUAL_STEP corresponds to the position of the job in the overall execution.

The gcrypto command keeps a record of jobs (submitted, executed and pending) in a session file (set the name with the ‘-s’ option); at each invocation of the command, the status of all recorded jobs is updated, output from finished jobs is collected, and a summary table of all known jobs is printed. New jobs are added to the session if new input files are added to the command line.
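The range-splitting just described amounts to the following (an illustrative sketch, not actual GC3Pie code; `job_ranges` is a hypothetical helper named here for clarity):

```python
def job_ranges(range_start, range_end, slice_size):
    """Yield the (start, end) sub-range handled by each gnfs-cmd job,
    mirroring the for-loop analogy: one job per SLICE-sized chunk."""
    for start in range(range_start, range_end, slice_size):
        yield start, start + slice_size

# gcrypto 800000000 1200000000 1000  ->  400000 jobs
ranges = list(job_ranges(800_000_000, 1_200_000_000, 1000))
print(len(ranges))   # 400000
print(ranges[0])     # (800000000, 800001000)
print(ranges[1])     # (800001000, 800002000)
```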
Options can specify a maximum number of jobs that should be in ‘SUBMITTED’ or ‘RUNNING’ state; gcrypto will delay submission of newly-created jobs so that this limit is never exceeded.

The gcrypto command executes several runs of gnfs-cmd on a parameter set, and collects the generated output. These runs are performed in parallel, up to a limit that can be configured with the -J command-line option. You can of course control how many runs should be executed and select what output files you want from each one.

In more detail, gcrypto does the following:

1. Reads the session (specified on the command line with the --session option) and loads all stored jobs into memory. If the session directory does not exist, one will be created with empty contents.

2. Divides the initial parameter range, given on the command line, into chunks, taking the -J value as a reference. So from a command line like the following:

   $ gcrypto 800000000 1200000000 1000 -J 200

   gcrypto will generate an initial chunk of 200 jobs, starting from the initial range value 800000000 and incrementing by 1000. All jobs will run gnfs-cmd on a specific parameter set (e.g. 800000000, 800001000, 800002000, …). gcrypto will keep the number of simultaneously running jobs constant, retrieving those that have terminated and submitting new ones until the whole parameter range has been computed.

3. Updates the state of all existing jobs, collects output from finished jobs, and submits new jobs generated in step 2. Finally, a summary table of all known jobs is printed. (To control the amount of printed information, see the -l command-line option in the Introduction to session-based scripts section.)

4. If the -C command-line option was given (see below), waits the specified amount of seconds, and then goes back to step 3.

The program gcrypto exits when all jobs have run to completion, i.e., when the whole parameter range has been computed.

Execution can be interrupted at any time by pressing Ctrl+C.
If the execution has been interrupted, it can be resumed at a later stage by calling gcrypto with exactly the same command-line options.

gcrypto requires a number of default input files common to every submitted job. This list of input files is automatically fetched by gcrypto from a default storage repository. Those files are:

gnfs-lasieve6
M1019 M1019.st
M1037 M1037.st
M1051 M1051.st
M1067 M1067.st
M1069 M1069.st
M1093 M1093.st
M1109 M1109.st
M1117 M1117.st
M1123 M1123.st
M1147 M1147.st
M1171 M1171.st
M8e_1200 M8e_1500 M8e_200 M8e_2000 M8e_2500 M8e_300 M8e_3000 M8e_400 M8e_4200 M8e_600 M8e_800
tdsievemt

When gcrypto has to be executed with a different set of input files, the additional command-line argument --input-files can be used to specify the location of a tar.gz archive containing the input files that gnfs-cmd will expect. Similarly, when a different version of the gnfs-cmd command needs to be used, the command-line argument --gnfs-cmd can be used to specify the location of the gnfs-cmd to be used.

## Command-line invocation of gcrypto

The gcrypto script is based on GC3Pie’s session-based script model; please read also the Introduction to session-based scripts section for an introduction to sessions and generic command-line options. A gcrypto command line is constructed as follows. Like a for-loop, the gcrypto driver script takes as input three mandatory arguments:

1. RANGE_START: initial value of the range (e.g., 800000000)
2. RANGE_END: final value of the range (e.g., 1200000000)
3. SLICE: extent of the range that will be examined by a single job (e.g., 1000)

Example 1. The following command-line invocation uses gcrypto to run gnfs-cmd on the parameter set ranging from 800000000 to 1200000000 with an increment of 1000.

$ gcrypto 800000000 1200000000 1000

In this case gcrypto will determine the chunk size from the default value of the -J option (the default is 50 simultaneous jobs).

Example 2.
$ gcrypto --session SAMPLE_SESSION -c 4 -w 4 -m 8 800000000 1200000000 1000

In this example, job information is stored into session SAMPLE_SESSION (see the documentation of the --session option in Introduction to session-based scripts). The command above creates the jobs, submits them, and finally prints the following status report:

Status of jobs in the 'SAMPLE_SESSION' session: (at 10:53:46, 02/28/12)
        NEW           0/50    (0.0%)
        RUNNING       0/50    (0.0%)
        STOPPED       0/50    (0.0%)
        SUBMITTED    50/50  (100.0%)
        TERMINATED    0/50    (0.0%)
        TERMINATING   0/50    (0.0%)
        total        50/50  (100.0%)

Note that the status report counts the number of jobs in the session, not the total number of jobs that would correspond to the whole parameter range. (Feel free to report this as a bug.)

Calling gcrypto over and over again will result in the same jobs being monitored; the -C option tells gcrypto to continue running until all jobs have finished running and the output files have been correctly retrieved. On successful completion, the command given in example 2 above would print:

Status of jobs in the 'SAMPLE_SESSION' session: (at 11:05:50, 02/28/12)
        NEW            0/400k    (0.0%)
        RUNNING        0/400k    (0.0%)
        STOPPED        0/400k    (0.0%)
        SUBMITTED      0/400k    (0.0%)
        TERMINATED  400k/400k  (100.0%)
        TERMINATING    0/400k    (0.0%)
        ok          400k/400k  (100.0%)
        total       400k/400k  (100.0%)

Each job will be named after the parameter range it has computed (e.g. 800001000, 800002000, …); you can see this by passing the -l option to gcrypto. Each of these jobs will create an output directory named after the job. For each job, the set of output files is automatically retrieved and placed in the locations described below.

## Output files for gcrypto

Upon successful completion, the output directory of each gcrypto job contains:

• a number of .tgz files, each of them corresponding to a step within the execution of the gnfs-cmd command.
• a log file named gcrypto.log containing both the stdout and the stderr of the gnfs-cmd execution.
Note: The number of .tgz files may depend on whether the execution of the gnfs-cmd command has completed or not (e.g. jobs may be killed by the batch system when exhausting requested resources).

## Example usage

This section contains commented example sessions with gcrypto.

### Manage a set of jobs from start to end

In typical operation, one calls gcrypto with the -C option and lets it manage a set of jobs until completion. So, to compute a whole parameter range from 800000000 to 1200000000 with an increment of 1000, submitting 200 jobs simultaneously, each of them requesting 4 computing cores, 8 GB of memory and 4 hours of wall-clock time, one can use the following command-line invocation:

$ gcrypto -s example -C 120 -J 200 -c 4 -w 4 -m 8 800000000 1200000000 1000

The -s example option tells gcrypto to store information about the computational jobs in the example.jobs directory. The -C 120 option tells gcrypto to update job state every 120 seconds; output from finished jobs is retrieved and new jobs are submitted at the same interval.

The above command will start by printing a status report like the following:

Status of jobs in the 'example.csv' session:
        SUBMITTED 1/1 (100.0%)

It will continue printing an updated status report every 120 seconds until the requested parameter range has been computed. In GC3Pie terminology, when a job is finished and its output has been successfully retrieved, the job is marked as TERMINATED:

Status of jobs in the 'example.csv' session:
        TERMINATED 1/1 (100.0%)

## Using GC3Pie utilities

GC3Pie comes with a set of generic utilities that can be used as a complement to the gcrypto command to better manage an entire session execution.

### gkill: cancel a running job

To cancel a running job, you can use the command gkill.
For instance, to cancel job.16, you would type the following command into the terminal:

gkill job.16

or:

gkill -s example job.16

gkill can also be used to cancel all jobs in a given state, e.g.:

gkill -s example -l UNKNOWN

Warning: There’s no way to undo a cancel operation! Once you have issued a gkill command, the job is deleted and it cannot be resumed. (You can still re-submit it with gresub, though.)

### ginfo: accessing low-level details of a job

It is sometimes necessary, for debugging purposes, to print out all the details about a job; the ginfo command does just that: it prints all the details that GC3Utils know about a single job. For instance, to print out detailed information about job.13 in session example, you would type:

ginfo -s example job.13

For a job in RUNNING or SUBMITTED state, only little information is known: basically, where the job is running, and when it was started:

$ ginfo -s example job.13
job.13
    cores: 2
    execution_targets: hera.wsl.ch
    log:
        SUBMITTED at Tue May 15 09:52:05 2012
        Submitted to 'wsl' at Tue May 15 09:52:05 2012
        RUNNING at Tue May 15 10:07:39 2012
    lrms_jobid: gsiftp://hera.wsl.ch:2811/jobs/116613370683251353308673
    lrms_jobname: LACAL_800001000
    original_exitcode: -1
    queue: smscg.q
    resource_name: wsl
    state_last_changed: 1337069259.18
    stderr_filename: gcrypto.log
    stdout_filename: gcrypto.log
    timestamp:
        RUNNING: 1337069259.18
        SUBMITTED: 1337068325.26
    unknown_iteration: 0
    used_cputime: 1380
    used_memory: 3382706

If you omit the job number, information about all jobs in the session will be printed.

Most of the output is only useful if you are familiar with GC3Utils inner workings. Nonetheless, ginfo output is definitely something you should include in any report about a misbehaving job!
For a finished job, the information is more complete and can include error messages in case the job has failed:

$ ginfo -c -s example job.13
job.13
    _arc0_state_last_checked: 1337069259.18
    _exitcode: 0
    _signal: None
    _state: TERMINATED
    cores: 2
    execution_targets: hera.wsl.ch
    log:
        SUBMITTED at Tue May 15 09:52:04 2012
        Submitted to 'wsl' at Tue May 15 09:52:04 2012
        TERMINATING at Tue May 15 10:07:39 2012
        TERMINATED at Tue May 15 10:07:43 2012
    lrms_jobid: gsiftp://hera.wsl.ch:2811/jobs/11441337068324584585032
    lrms_jobname: LACAL_800001000
    original_exitcode: 0
    queue: smscg.q
    resource_name: wsl
    state_last_changed: 1337069263.13
    stderr_filename: gcrypto.log
    stdout_filename: gcrypto.log
    timestamp:
        SUBMITTED: 1337068324.87
        TERMINATED: 1337069263.13
        TERMINATING: 1337069259.18
    unknown_iteration: 0
    used_cputime: 360
    used_memory: 3366977
    used_walltime: 300

With option -v, ginfo output is even more verbose and complete, and includes information about the application itself, the input and output files, plus some backend-specific information:

$ ginfo -v -s example job.13
job.13
    arguments: 800000800, 100, 2, input.tgz
    changed: False
    environment:
    executable: gnfs-cmd
    executables: gnfs-cmd
    execution:
        _arc0_state_last_checked: 1337069259.18
        _exitcode: 0
        _signal: None
        _state: TERMINATED
        cores: 2
        execution_targets: hera.wsl.ch
        log:
            SUBMITTED at Tue May 15 09:52:04 2012
            Submitted to 'wsl' at Tue May 15 09:52:04 2012
            TERMINATING at Tue May 15 10:07:39 2012
            TERMINATED at Tue May 15 10:07:43 2012
        lrms_jobid: gsiftp://hera.wsl.ch:2811/jobs/11441337068324584585032
        lrms_jobname: LACAL_800001000
        original_exitcode: 0
        queue: smscg.q
        resource_name: wsl
        state_last_changed: 1337069263.13
        stderr_filename: gcrypto.log
        stdout_filename: gcrypto.log
        timestamp:
            SUBMITTED: 1337068324.87
            TERMINATED: 1337069263.13
            TERMINATING: 1337069259.18
        unknown_iteration: 0
        used_cputime: 360
        used_memory: 3366977
        used_walltime: 300
    inputs:
        srm://dpm.lhep.unibe.ch/dpm/lhep.unibe.ch/home/crypto/gnfs-cmd_20120406: gnfs-cmd
        srm://dpm.lhep.unibe.ch/dpm/lhep.unibe.ch/home/crypto/lacal_input_files.tgz: input.tgz
    jobname: LACAL_800000900
    join: True
    output_base_url: None
    output_dir: /data/crypto/results/example.out/8000001000
    outputs:
        @output.list: file, , @output.list, None, None, None, None
        gcrypto.log: file, , gcrypto.log, None, None, None, None
    persistent_id: job.1698503
    requested_architecture: x86_64
    requested_cores: 2
    requested_memory: 4
    requested_walltime: 4
    stderr: None
    stdin: None
    stdout: gcrypto.log
    tags: APPS/CRYPTO/LACAL-1.0
D. Tournament Countdown time limit per test 2 seconds memory limit per test 256 megabytes input standard input output standard output This is an interactive problem. There was a tournament consisting of $2^n$ contestants. The $1$-st contestant competed with the $2$-nd, the $3$-rd competed with the $4$-th, and so on. After that, the winner of the first match competed with the winner of second match, etc. The tournament ended when there was only one contestant left, who was declared the winner of the tournament. Such a tournament scheme is known as the single-elimination tournament. You don't know the results, but you want to find the winner of the tournament. In one query, you select two integers $a$ and $b$, which are the indices of two contestants. The jury will return $1$ if $a$ won more matches than $b$, $2$ if $b$ won more matches than $a$, or $0$ if their number of wins was equal. Find the winner in no more than $\left \lceil \frac{1}{3} \cdot 2^{n + 1} \right \rceil$ queries. Here $\lceil x \rceil$ denotes the value of $x$ rounded up to the nearest integer. Note that the tournament is long over, meaning that the results are fixed and do not depend on your queries. Input The first line contains a single integer $t$ ($1 \leq t \leq 2^{14}$) — the number of test cases. The only line of input contains a single integer $n$ ($1 \leq n \leq 17$). It is guaranteed that the sum of $2^n$ over all test cases does not exceed $2^{17}$. Interaction The interaction for each test case begins by reading the integer $n$. To make a query, output "? a b" ($1 \leq a, b \leq 2^n$) without quotes. Afterwards, you should read one single integer — the answer for your query. You can make at most $\left \lceil \frac{1}{3} \cdot 2^{n + 1} \right \rceil$ such queries in each test case. 
If you receive the integer $-1$ instead of an answer or a valid value of $n$, it means your program has made an invalid query, has exceeded the limit of queries, or has given an incorrect answer on the previous test case. Your program must terminate immediately to receive a Wrong Answer verdict. Otherwise you can get an arbitrary verdict because your solution will continue to read from a closed stream.

When you are ready to give the final answer, output "! x" ($1 \leq x \leq 2^n$) without quotes — the winner of the tournament. Giving this answer does not count towards the limit of queries.

After solving a test case, your program should move to the next one immediately. After solving all test cases, your program should be terminated immediately.

After printing a query or the answer do not forget to output an end of line and flush the output. Otherwise, you will get Idleness limit exceeded. To do this, use:

• fflush(stdout) or cout.flush() in C++;
• System.out.flush() in Java;
• flush(output) in Pascal;
• stdout.flush() in Python;
• see documentation for other languages.

Hacks

To hack, use the following format. The first line contains an integer $t$ ($1 \leq t \leq 2^{14}$) — the number of test cases. The first line of each test case contains a single integer $n$ ($1 \leq n \leq 17$). The second line of each test case contains $2^n$ numbers on a line — the number of wins of each participant. There should be a sequence of matches that is consistent with the number of wins. The sum of $2^n$ should not exceed $2^{17}$.

Example

Input
1
3

2

0

2

Output

? 1 4

? 1 6

? 5 7

! 7

Note

The tournament in the first test case is shown below. The number of wins is $[1,0,0,2,0,1,3,0]$. In this example, the winner is the $7$-th contestant.
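One well-known strategy that fits this query budget eliminates three of every four remaining contestants with two queries: for a group $a, b, c, d$ (where $a$ plays $b$ and $c$ plays $d$), comparing $a$ and $c$ already rules out two of the four, and one more comparison settles the group. The sketch below is not the setters' reference solution; it runs the idea against a locally simulated judge (the hidden `wins` array) instead of the real interactive protocol, so no flushing is needed here:

```python
import math

def find_winner(n, wins):
    """Locate the tournament winner with at most ceil(2^(n+1)/3) comparisons.

    wins[i-1] is contestant i's (hidden) number of match wins; ask()
    simulates the judge's 1/2/0 answer instead of doing real interaction.
    """
    queries = 0

    def ask(a, b):
        nonlocal queries
        queries += 1
        if wins[a - 1] > wins[b - 1]:
            return 1
        if wins[a - 1] < wins[b - 1]:
            return 2
        return 0

    cands = list(range(1, 2 ** n + 1))
    while len(cands) > 1:
        nxt = []
        i = 0
        # Take candidates four at a time: a meets b next, c meets d next.
        while i + 3 < len(cands):
            a, b, c, d = cands[i:i + 4]
            r = ask(a, c)
            if r == 0:       # a and c tie, so neither won this sub-bracket
                x, y = b, d
            elif r == 1:     # c cannot be the sub-bracket winner, nor can b
                x, y = a, d
            else:            # symmetric case: a and d are eliminated
                x, y = b, c
            nxt.append(x if ask(x, y) == 1 else y)
            i += 4
        if i < len(cands):   # exactly two candidates remain: one final query
            a, b = cands[i], cands[i + 1]
            nxt.append(a if ask(a, b) == 1 else b)
        cands = nxt
    assert queries <= math.ceil(2 ** (n + 1) / 3)
    return cands[0]

# The sample tournament: wins = [1,0,0,2,0,1,3,0], winner is contestant 7.
print(find_winner(3, [1, 0, 0, 2, 0, 1, 3, 0]))  # 7
```

In a real submission the same logic would print "? a b", flush, and read the judge's reply in `ask`, then print "! x" at the end.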
# SunPy io¶

This submodule contains two types of routines: the first reads (data, header) pairs from files in a way similar to FITS files; the other is special readers for files that are commonly used in solar physics.

## sunpy.io Package¶

File input and output functions

### Functions¶

read_file(filepath[, filetype]) — Automatically determine the filetype and read the file.

read_file_header(filepath[, filetype]) — Reads the header from a given file.

write_file(fname, data, header[, filetype]) — Write a file from a data & header pair using one of the defined file types.

### Classes¶

FileHeader(*args, **kwargs) — FileHeader is designed to provide a consistent interface to all other sunpy classes that expect a generic file.

### sunpy.io.fits Module¶

Notes

[1] FITS files allow comments to be attached to every value in the header. This is implemented in this module as a KEYCOMMENTS dictionary in the sunpy header. To add a comment to the file on write, add a comment to this dictionary with the same name as a key in the header (upcased).

[2] Due to the way FITS works with images, the header dictionary may differ depending on whether it is accessed before or after fits[0].data is requested. If the header is read before the data, then the original header will be returned. If the header is read after the data has been accessed, then the data will have been scaled and a modified header reflecting these changes will be returned: BITPIX may differ, and BSCALE and BZERO may be dropped in the modified version.

[3] The verify('fix') call attempts to handle violations of the FITS standard. For example, nan values will be converted to "nan" strings. Attempting to cast a pyfits header to a dictionary while it contains invalid header tags will result in an error, so verifying it early on makes the header easier to work with later.

#### Functions¶

header_to_fits(header) — Convert a header dict to a Header.
read(filepath[, hdus, memmap]) — Read a FITS file.

get_header(afile) — Read a FITS file and return just the headers for all HDUs.

write(fname, data, header[, hdu_type]) — Take a data header pair and write a FITS file.

extract_waveunit(header) — Attempt to read the wavelength unit from a given FITS header.

### sunpy.io.jp2 Module¶

#### Functions¶

read(filepath, **kwargs) — Reads a JPEG2000 file.

get_header(filepath) — Reads the header from the file.

write(fname, data, header) — Placeholder for a required file writer.

### sunpy.io.ana Module¶

Warning

The reading and writing of ANA files is not supported under Windows. The C extensions are not built on Windows.

Notes

ANA is a script that allows people to access compressed ANA files. It accesses a C library, based on Michiel van Noort's IDL DLM library 'f0', which contains a cleaned-up version of the original anarw routines.

Created by Tim van Werkhoven (t.i.m.vanwerkhoven@gmail.com) on 2009-02-11. Copyright (c) 2009–2011 Tim van Werkhoven.

#### Functions¶

read(filename[, debug]) — Loads an ANA file and returns the data and a header in a list of (data, header) tuples.

get_header(filename[, debug]) — Loads an ANA file and only returns the header, consisting of the dimensions, size (defined as the product of all dimensions times the size of the datatype, thus not relying on the actual file size) and comments.

write(filename, data[, comments, compress, …]) — Saves a 2D numpy array as an ANA file and returns the bytes written or NULL.

read_genx(filename) — solarsoft genx file reader.
# Chapter 15 Testing parts of models

require(mosaic) # mosaic operators and data used in this section

The basic software for hypothesis testing on parts of models involves the familiar lm() and summary() operators for generating the regression report and the anova() operator for generating an ANOVA report on a model.

## 15.1 ANOVA reports

The anova() operator takes a model as an argument and produces the term-by-term ANOVA report. To illustrate, consider this model of wages from the Current Population Survey data.

Cps <- CPS85 # from mosaicData
mod1 <- lm( wage ~ married + age + educ, data = Cps)
anova(mod1)

## Analysis of Variance Table
##
## Response: wage
##            Df  Sum Sq Mean Sq  F value    Pr(>F)
## married     1   142.4  142.40   6.7404  0.009687 **
## age         1   338.5  338.48  16.0215 7.156e-05 ***
## educ        1  2398.7 2398.72 113.5405 < 2.2e-16 ***
## Residuals 530 11197.1   21.13
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Note the small p-value on the married term: 0.0097.

To change the order of the terms in the report, you can create a new model with the explanatory terms listed in a different order. For example, here's the ANOVA on the same model, but with married last instead of first:

mod2 <- lm( wage ~ age + educ + married, data = Cps)
anova(mod2)

## Analysis of Variance Table
##
## Response: wage
##            Df  Sum Sq Mean Sq  F value   Pr(>F)
## age         1   440.8  440.84  20.8668 6.13e-06 ***
## educ        1  2402.7 2402.75 113.7310 < 2.2e-16 ***
## married     1    36.0   36.01   1.7046   0.1923
## Residuals 530 11197.1   21.13
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Now the p-value on married is large. This suggests that much of the variation in wage that is associated with married can also be accounted for by age and educ instead.

## 15.2 Non-Parametric Statistics

Consider the model of world-record swimming times plotted on page 116. It shows pretty clearly the interaction between year and sex.
It’s easy to confirm that this interaction term is statistically significant:

Swim <- SwimRecords # in mosaicData
anova( lm( time ~ year * sex, data = Swim) )

## Analysis of Variance Table
##
## Response: time
##           Df Sum Sq Mean Sq F value    Pr(>F)
## year       1 3578.6  3578.6 324.738 < 2.2e-16 ***
## sex        1 1484.2  1484.2 134.688 < 2.2e-16 ***
## year:sex   1  296.7   296.7  26.922 2.826e-06 ***
## Residuals 58  639.2    11.0
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The p-value on the interaction term is very small: $2.8 \times 10^{-6}$.

To check whether this result might be influenced by the shape of the distribution of the time or year data, you can conduct a non-parametric test. Simply take the rank of each quantitative variable:

mod <- lm( rank(time) ~ rank(year) * sex, data = Swim)
anova(mod)

## Analysis of Variance Table
##
## Response: rank(time)
##                Df  Sum Sq Mean Sq   F value Pr(>F)
## rank(year)      1 14320.5 14320.5 3755.7711 <2e-16 ***
## sex             1  5313.0  5313.0 1393.4135 <2e-16 ***
## rank(year):sex  1     0.9     0.9    0.2298 0.6335
## Residuals      58   221.1     3.8
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

With the rank-transformed data, the p-value on the interaction term is much larger: no evidence for an interaction between year and sex. You can see this directly in a plot of the data after rank-transforming time:

xyplot( rank(time) ~ year, groups = sex, data = Swim)

The rank-transformed data suggest that women's records are improving in about the same way as men's. That is, new records are set by women at a rate similar to the rate at which men set them.
• YABIN SUN

Articles written in Bulletin of Materials Science

• High piezoelectric properties of 0.82(Bi$_{0.5}$Na$_{0.5}$)TiO$_3$–0.18(Bi$_{0.5}$K$_{0.5}$)TiO$_3$ lead-free ceramics modified by (Mn$_{1/3}$Nb$_{2/3}$)$^{4+}$ complex ions

The complex-ion (Mn$_{1/3}$Nb$_{2/3}$)$^{4+}$-doped 0.82BNT–0.18BKT (BNKT-$x$MN) ceramics were prepared by conventional solid-state sintering. The effects of the MN content on the structural and electrical properties of the BNKT-$x$MN ceramics were investigated. The grain size decreases sharply after doping with MN. With the increase of the MN content, the phase structure changes from the mixed rhombohedral and tetragonal phase to the tetragonal phase, and then to the pseudo-cubic phase; the ferroelectric phase transforms to the relaxor phase. At the critical phase ($x = 0.03$), the maximum positive bipolar strain and unipolar strain are 0.38 and 0.386%, respectively. The corresponding $d^*_{33}$ and $d_{33}$ are 767 pm V$^{–1}$ and 158 pC N$^{–1}$, respectively. Meanwhile, the dielectric constant gradually decreases with the increase of the MN content, which flattens the permittivity curves. The large piezoelectric responses are closely associated with the reversible relaxor–ferroelectric phase transformation.

• Bulletin of Materials Science, Volume 45, 2022
Expand $$(\sqrt{2} + \sqrt{3} + \sqrt{6})^2$$

I think I combine some terms, but I'm not sure.

Jun 30, 2020

#1

$$(a + b + c)^2 = a^2 + b^2 + c^2 + 2ab + 2bc + 2ca$$

just plug in the values and solve!

$$(\sqrt{2})^2+(\sqrt{3})^2+(\sqrt{6})^2+2(\sqrt{2}\cdot\sqrt{3})+2(\sqrt{3}\cdot\sqrt{6})+2(\sqrt{2}\cdot\sqrt{6})$$
$$=2+3+6+2\sqrt{6}+2\sqrt{18}+2\sqrt{12}$$
$$=11+2\sqrt{6}+2\sqrt{18}+2\sqrt{12}$$
$$=11+2\sqrt{6}+2\sqrt{9\cdot2}+2\sqrt{3\cdot4}$$
$$=\boxed{11+2\sqrt{6}+6\sqrt{2}+4\sqrt{3}}$$

Jun 30, 2020
# Citing specific slides of a presentation

I encounter the following problem to which I simply can't find a solution. I'm working on a homework assignment, and I want to cite the slides the professor gave us. So, if I type \cite[20]{presentationOne} I want [1, F. 20] as the result and not [1, S. 20]. (F. for Folie, and S. for Seite; I'm from Germany.) I'm citing books as well, so simply changing it for all citations isn't a solution. I use biblatex with the biber backend.

Hope anyone has a clean solution for this. Otherwise I would have to mention in the foreword what I mean by those citations.

• You could use \cite[F.~20]{presentationOne} when citing the presentation. Do you want an automatic solution for certain entries? – lockstep Mar 8 '13 at 13:39
• For my work now, your comment solves my problem. But I think for future readers, with longer documents and more citations than mine, an automatic solution would be nice. So if you know one it would be nice. – Dave Mar 8 '13 at 13:51
• Do you have a special bibliography driver for presentations? – Marco Daniel Mar 8 '13 at 13:58
• @MarcoDaniel No, I don't have any. Would you recommend one? – Dave Mar 8 '13 at 14:27
• If you have a special driver like presentation you can set up the formatting of the optional key by \DeclareFieldFormat[presentation]{postnote}{F~#1} or some other modification related to the driver. – Marco Daniel Mar 8 '13 at 14:34

There are different ways to achieve this:

1. You can just use something like \cite[F.~20]{reference}. Whenever the optional argument is more than just a number, biblatex outputs it literally instead of putting a pagination string in front of it.¹

2. As already mentioned in a comment to your question, you could define a separate bibliography driver presentation for presentations. Then you could just define a specific postnote style with \DeclareFieldFormat[presentation]{postnote}{F~#1} or something similar. This would be some kind of a global solution.

3.
For every item in your .bib file you can set the pagination = {page|column|line|verse|section|paragraph|none} field. biblatex evaluates this field and changes its formatting of the postnotes according to the value of the pagination field.

3.1 You could now just set the pagination field to none, which will already prevent biblatex from printing a "p." before the number, but you will still have to take care of setting the "F.~" in the right places.

3.2 You could abuse one of the default types (column, verse etc.) for your case. You then give your .bib entry, for example, a pagination = {verse} field and redefine the corresponding verse locale to fit your needs with \DefineBibliographyStrings{german}{verse = {Folie}, verses = {Folien}}.

3.3 I didn't look into the code of biblatex, but I think it may be possible to copy and modify the code for the pagination command to add another value (something like slide) to the pagination type list. In the long run, this would be the cleanest but also most invasive method to solve your problem.

–– ¹ This is not exactly the whole story, as there are also commands like \psq to announce "and following pages". But everything that differs from a number and those special commands is just printed by biblatex without further processing.

Expanding on Benedikt Bauer's wonderful answer, it is actually not very hard to define one's own pagination style (point 3.3). biblatex treats the pagination field in a way very convenient for us: the pagination field takes a key, such as the standard keys page, column, line, verse, section, paragraph or none. biblatex reads the key and tries to put the bibstring named after the key before the page numbers; except, of course, for the case in which pagination is none, in which case no bibstring is inserted. So if we define two new bibliography strings slide and slides, we can then specify pagination = {slide} in the .bib entry and are good to go.
\NewBibliographyString{slide,slides}
\DefineBibliographyStrings{ngerman}{%
  slide  = {Folie},
  slides = {Folien},
}
\DefineBibliographyStrings{english}{%
  slide  = {slide},
  slides = {slides},
}

Unfortunately, \DefineBibliographyStrings, which can be used in the preamble, does not support short and long bibstrings; if you want those, you will have to define your own .lbx file inheriting all the other features, but adding slide and slides.

\ProvidesFile{ngerman-slides.lbx}[2013/10/15 ngerman with slides]

\InheritBibliographyExtras{ngerman}
\NewBibliographyString{slide,slides}
\DeclareBibliographyStrings{%
  inherit = {ngerman},
  slide   = {{Folie}{F\adddot}},
  slides  = {{Folien}{F\adddot}},
}

This language definition can then be loaded via \DeclareLanguageMapping{ngerman}{ngerman-slides}.

The MWE

\documentclass[11pt]{scrartcl}
\begin{filecontents}{\jobname.bib}
@inbook{DahmenReusken:Interpolation,
  author     = {Wolfgang Dahmen and Arnold Reusken},
  title      = {Interpolation},
  chapter    = {8},
  booktitle  = {Numerik für Ingenieure und Naturwissenschaftler},
  date       = {2007-11-21},
  urldate    = {2013-08-11},
  pagination = {slide},
}
\end{filecontents}
\usepackage[T1]{fontenc}
\usepackage[latin1]{inputenc}
\usepackage{lmodern}
\usepackage[ngerman]{babel}
\usepackage[babel]{csquotes}
\usepackage[backend=biber, style=authoryear]{biblatex}
\NewBibliographyString{slide,slides}
\DefineBibliographyStrings{ngerman}{%
  slide  = {Folie},
  slides = {Folien},
}
\DefineBibliographyStrings{english}{%
## Divisor counting problems

Last night my older son asked me for a little help understanding divisor counting problems. He was struggling a little bit with that type of problem at his math club. Luckily, Art of Problem Solving's Introduction to Number Theory book has an entire section on divisor counting problems, so it wasn't difficult at all to find some good problems. We worked through four of them.

Divisor counting is one of my favorite topics – I remember learning about it in high school and being amazed that you could know the number of factors of an integer without listing all of them. That, all by itself, is a neat basic counting idea. Another reason that I like the topic is that it helps me sneak in a little arithmetic practice with the kids, and you'll see why I like that practice in the first two videos.

Here are the four problems:

(1) Find the number of positive divisors of 999,999.

(2) What is the sum of the three positive numbers less than 1,000 that have exactly 5 positive divisors?

(3) If $n$ has two prime divisors and 9 total divisors, how many divisors does $n^2$ have?

(4) How many divisors of 3,240 are (i) multiples of 3, and [separate question] (ii) how many are perfect squares?
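The counting trick behind these problems, that $n = p_1^{a_1} \cdots p_k^{a_k}$ has $(a_1+1)\cdots(a_k+1)$ divisors, is easy to check with a short script (my own illustration, not part of the club materials):

```python
def divisor_count(n):
    """Count divisors of n via its prime factorization:
    if n = p1^a1 * ... * pk^ak, the count is (a1+1)*...*(ak+1)."""
    count, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            exp = 0
            while n % p == 0:
                n //= p
                exp += 1
            count *= exp + 1
        p += 1
    if n > 1:          # one leftover prime factor
        count *= 2
    return count

# Problem (1): 999,999 = 3^3 * 7 * 11 * 13 * 37, so 4*2*2*2*2 = 64 divisors
print(divisor_count(999999))   # 64

# Problem (2): exactly 5 divisors forces n = p^4 (since 5 is prime)
fives = [n for n in range(1, 1000) if divisor_count(n) == 5]
print(fives, sum(fives))       # [16, 81, 625] 722
```

For problem (3), the same formula shows that if $n = p^2 q^2$ (9 divisors), then $n^2 = p^4 q^4$ has $5 \cdot 5 = 25$ divisors.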
## Introductory Algebra for College Students (7th Edition)

The solutions of the given inequality are the numbers that are greater than or equal to $-2$ but less than or equal to $0$. To graph the solutions on a number line, plot solid dots at $x=-2$ and $x=0$ (alternatively, a bracket may be used instead of a solid dot). Then, shade the region between the two.
# Wedderburn's theorem through Morita theory

Wedderburn's theorem: Let $R$ be a ring and $M$ a simple, faithful module over $R$. Let $E = {_R\text{End}(M)}$ and assume that $M$ is finite dimensional over $E$. Then $R\cong {_E\text{End}(M)}$.

Apparently this can be proven using a Morita context: if $(R,E,M,M^*,\mu,\tau)$ is a Morita context and $\mu$ is surjective, I will get my result. (This is a standard result from Morita theory.) However, this requires that $M$ be an $R,E$-bimodule, and I don't see how it is a right $E$-module. Any help?

What I tried: I tried defining $m\cdot f:=f(m)$, but that doesn't respect the product. I.e.,
$$m(fg) = (fg)(m) = f(g(m))\\(mf)g = g(mf) = g(f(m))$$

Another result says that if $(R,E^{op},M,M^*,\mu,\tau)$ is a Morita context and the context is strict, then $R\cong {_{E^{op}}\text{End}(M)}$. Since $_{E^{op}}\text{End}(M) \cong{_E\text{End}(M)}$, this only leaves me to show that the context is strict, which I don't see how to do.

• I don't know how relevant this still is but have you tried finding a progenerator? That might be easier – M.v.Roozendaal Jan 27 '18 at 16:42
## Precalculus (6th Edition) Blitzer

The difference between the two numbers is 8. If one number is represented by x, the other number can be expressed as $x-8$. The product of the numbers, $P\left( x \right)$, expressed in the form $P\left( x \right)=a{{x}^{2}}+bx+c$, is $P\left( x \right)={{x}^{2}}-8x$.

The difference between the two numbers is 8. It is assumed that one number is x and the other number is y. From the statement it is concluded that:
\begin{align} & x-y=8 \\ & -y=8-x \\ & y=x-8 \end{align}
So, the other number is expressed as $x-8$.

$P\left( x \right)$ is the product of the numbers:
\begin{align} & P\left( x \right)=xy \\ & =x\left( x-8 \right) \\ & ={{x}^{2}}-8x \end{align}
So, the product $P\left( x \right)$, expressed in the form $P\left( x \right)=a{{x}^{2}}+bx+c$, is ${{x}^{2}}+\left( -8 \right)x$.
# Moment Generating Functions

#### silver

##### New Member

The discrete random variable X has probability function

p(x) = 4/(5^x + 1), x = 0, 1, 2, ...

Derive the MGF of X and use it to find E(X) and V(X).

I have managed to get this far:

Mx(t) = Σ(e^tX)(4/(5^X+1))

e^(tX) = 1 + tX + (t^2/2!)X^2 + (t^3/3!)X^3 + ...

So Mx(t) = Σ(4/(5^X+1)) + tΣX(4/(5^X+1)) + (t^2/2!)ΣX^2(4/(5^X+1)) + (t^3/3!)ΣX^3(4/(5^X+1)) + ...

But I have no idea where to go from here; any help would be much appreciated!

#### BGM

##### TS Contributor

First of all, do you mean the p.m.f. is of the form

$$\Pr\{X = x\} = \frac {c} {5^x + 1}, x = 0, 1, 2, \ldots$$

Then from wolframalpha, http://www.wolframalpha.com/input/?i=sum+1/(5^x+++1),+x+=+0+to+inf

the normalizing constant is not equal to 4, and the series itself does not have a nice closed-form solution (just in terms of the digamma function). Is there anything wrong?
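BGM's point is easy to verify numerically. The snippet below also checks an alternative reading of the probability function, p(x) = 4/5^(x+1), which does sum to 1; that alternative reading is my own guess at the intended pmf, not something stated in the thread:

```python
# partial sums converge quickly here, so 200 terms is plenty
s_literal = sum(1 / (5 ** x + 1) for x in range(200))   # reading: c / (5^x + 1)
s_alt     = sum(4 / 5 ** (x + 1) for x in range(200))   # reading: 4 / 5^(x+1)

# with the literal reading, 4 is not the normalizing constant
print(4 * s_literal)   # about 2.86, not 1
# the alternative reading is a proper (geometric) pmf
print(s_alt)           # 1.0 up to floating point
```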
# Why is a program with only atomics in SC-DRF but not in HRF-direct?

In the paper "Heterogeneous-Race-Free Memory Models" [1], the authors state that a program consisting only of atomics is race free in SC-DRF but not in HRF-direct. I am not able to understand why this is. Can anyone explain how this can happen?

[1] Derek R. Hower et al., "Heterogeneous-Race-Free Memory Models". In Proceedings of ASPLOS '14, ACM, 2014. (PDF)

Hopefully they didn't explicitly say exactly "a program consisting only of atomics is race free in SC-DRF." That's incorrect. They do say that "[in] scoped synchronization ... it is possible to write a racey program that is composed entirely of atomics if those atomics do not use scopes correctly," [top of page 2], which is slightly different (and uses the word "racey" ambiguously, perhaps leading you to believe they meant the incorrect statement). They perhaps should have instead said that in scoped synchronization it is possible to write a non sequentially consistent program that is composed entirely of atomics if those atomics do not use scopes correctly.

SC-DRF (roughly: the memory semantics for C++ and Java) divides memory locations into two classes: synchronization objects (sometimes called atomics) and data objects. For the most part all operations on synchronization objects are guaranteed to be sequentially consistent.[1] (Not race free.) This is a constraint on the compiler, not the programmer. It says that if the programmer says that one thread writes atomic object a then writes atomic object b, all the threads will see the writes occur in this order. (The compiler is not allowed to reorder the accesses, and is required to insert the appropriate memory barriers and fences on machines that aren't sequentially consistent.)

Sequentially consistent means that all threads on all processors agree on a total order of all memory operations.
If one thread thinks that operation $x$ happened before operation $y$, all the threads think that operation $x$ happened before operation $y$. Sequentially consistent does not mean not racey and it does not mean deterministic. For example, two different threads could try to write the same synchronization variable at approximately the same time. These accesses will be racey. A sequentially consistent system will just guarantee that if one thread thinks that write $x$ happened before write $y$, all the threads will agree that write $x$ happened before write $y$. Sequential consistency is useful because you can use it to construct and reason about useful non-deterministic synchronization objects like mutexes and condition variables. SC-DRF then says that if your program is data race free, the operations on data objects will appear to be sequentially consistent. The compiler is permitted to reorder (or even sometimes eliminate) operations on data objects, and if the underlying machine is not sequentially consistent, the compiler is not required to insert memory barriers or fences. But the compiler is not allowed to reorder operations on data objects with respect to operations on synchronization objects. Data race free is not the same as race free. And sequentially consistent is not the same as race free. A particular program execution schedule is data race free if for any pair of data object accesses, $x$ and $y$, by two different threads (at least one of which is a write operation) we can use the (sequentially consistent) order of atomic object accesses to prove that either $x$ happened before $y$ or $y$ happened before $x$. If we can not do the proof then that execution schedule is data racey. A program is data race free if all possible execution schedules are data race free. A program is data racey if there is an execution schedule that is data racey. Being data race free is a constraint on the programmer, not the compiler. 
It says that if your program is data racey, then your program has no semantics. The compiler is allowed to do anything it wants, including halting the program, going into an infinite loop, or blowing up the computer.

Oh dear. TL;DR! I haven't even mentioned SC-for-HRF yet!

In SC-for-HRF the atomics need to specify what scope they belong to. Atomic accesses are guaranteed (by the compiler) to be sequentially consistent only within their own scope. All threads within the scope will agree about the order in which the atomic accesses occur, but threads in a different scope may not agree (and may not even be able to see the accesses at all, let alone their order). For example it might be the case that all the GPU threads agree that thread 19 acquired mutex A then acquired mutex B, but all the CPU threads do not agree (or know) that thread 19 acquired mutex A or mutex B at all. SC-DRF does not have "scopes" so doesn't suffer this problem.

[1] The exceptions to the sequentially consistent atomics have to do with explicitly using std::memory_order_relaxed. So don't do that. (And I'll ignore the exceptions in the rest of what I'm saying.)

• Thank you for the succinct explanation of SC. This is the best I've read so far. I did not want it to end :) Could you please also explain what "race free" is then, as you say it is different from data race free and SC? – sanatana Aug 10 '14 at 22:06
• "Race free" means that both your data and your synchronization are "race free", which means that your synchronization must be deterministic. (A "race" is an ordering that is determined (resolved) by a non-deterministic process.) There are deterministic synchronization systems (e.g. groups.csail.mit.edu/commit/papers/09/asplos073-olszewski.pdf) but they are unusual, and this isn't one. – Wandering Logic Aug 11 '14 at 0:01
• Personally, I would prefer that nobody use the term "racey" or "race-free" with respect to synchronization operations.
I'd prefer that they used "non-deterministic" and "deterministic" for the resolution of who acquires a mutex. And thus I'd prefer that the word "race" always be preceded by the word "data." – Wandering Logic Aug 11 '14 at 0:06 • You say "the compiler is not allowed to reorder operations on data objects with respect to operations on synchronization objects." - Isn't the (C++) compiler (and CPU) allowed to reorder per the implicit acquire/release characteristics of SeqCst operations ? (that is, reorder operations on data objects one-way with SeqCst stores and reorder them in opposite direction with SeqCst loads) – LWimsey Dec 29 '17 at 22:28 • @LWimsey Yes, I think you are correct. My answer was already too long, so I was imprecise with respect to acquire/release. – Wandering Logic Dec 30 '17 at 14:59
When we moved into our first house we inherited a half-finished retaining wall. Eventually we were kindly gifted some wood planks, represented as a list of lengths wood_lengths. To finish the retaining wall we had to cut the planks to get planks of all the lengths we require, so we had another list of lengths required_lengths. We want to use as few cuts as possible, because we are lazy programmers.

Task: write an algorithm to find the list of cuts you should make. A cut can be described as:

wood_num: the index of the length of wood to cut from in wood_lengths
cut_amount: the length of wood to cut from this wood_num

e.g. cutting a plank of length 10 from wood_lengths[0].

Finding an optimal solution means generating all valid possibilities and taking the one with optimal output (the smallest number of cuts in this case). One strategy for trying all possibilities is to imagine:

for required_lengths[0] we could take the cut from any length in wood_lengths
for required_lengths[1] we could take the cut from any length in wood_lengths
…

This recursive algorithm would generate a recursive call tree with len(wood_lengths) branches at each step and len(required_lengths) levels deep, so it would take at least $O( len(wood\_lengths) ^ {len(required\_lengths)} )$ time. There are some conditions allowing you to prune this recursive call tree:

• You can't take a cut of size N from a wood length less than N.
• Only consider non-duplicate wood lengths (if you have planks A and B both of size N, then you could take a cut from either; it would be equivalent).

What's wrong with this solution? Overlapping subsolutions: consider the call tree for a small input. Notice how wood_lengths=[4, 6, 8], required_lengths=[2] is computed twice; when inputs get large, this can mean the algorithm spends a massive amount of time solving problems that it has already solved.
If the lengths in wood_lengths (regardless of order) are the same and required_length_idx is the same, then retaining_wall_recursive should return the same thing. One way to improve a recursive algorithm so that it doesn't compute the same thing twice is called memoization (caching the function call): we could store a hashtable (a dict) mapping the given arguments to the output of the function. One way to implement this in Python is with decorators; it's quite nice because our caching logic is in one place.

This runs a lot faster, but does use more memory. Could we be better about our memory usage by smartly evicting keys?

TODO: finish dynamic programming example

Maximise reuse / Minimise waste

What if there were multiple solutions with the fewest cuts? Then how should we decide which is best? First and foremost we want to make the fewest cuts; secondly we want to maximize reuse (we want nice long offcuts we can use for other things). How should we define reuse? Well, we know we want to build another retaining wall, which will have similar distances between posts (and thus similar required_lengths). As a heuristic, we know we will only be able to use a leftover plank if it's at least the smallest length in required_lengths. This means we can define reuse as the number of times you can cut the smallest required_length out of your remaining planks. Waste is how much reuse a solution doesn't get (the total length of all the offcuts that can't be reused). Another way of wording this is that we want to minimize waste.
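The post's own code snippets didn't survive, so here is a hypothetical reconstruction of the memoized search it describes. All names are mine, functools.lru_cache stands in for the hand-rolled caching decorator, and I assume a cost model in which taking a piece from a plank of exactly the required length needs no cut:

```python
from functools import lru_cache

def plan_cuts(wood_lengths, required_lengths):
    """Return (num_cuts, cuts) with cuts a list of (wood_num, cut_amount),
    or None if the required lengths cannot all be satisfied."""
    required = tuple(required_lengths)

    @lru_cache(maxsize=None)
    def search(planks, idx):
        # planks: sorted tuple of (length, wood_num); idx: next required length
        if idx == len(required):
            return (0, ())
        need = required[idx]
        best = None
        seen = set()
        for i, (length, wood_num) in enumerate(planks):
            if length < need or length in seen:   # pruning rules from the post
                continue
            seen.add(length)
            rest = planks[:i] + planks[i + 1:]
            if length > need:                     # a real cut leaves an offcut
                rest = tuple(sorted(rest + ((length - need, wood_num),)))
                cost = 1
            else:                                 # exact fit: no cut needed
                cost = 0
            sub = search(rest, idx + 1)
            if sub is not None:
                cand = (cost + sub[0], ((wood_num, need),) + sub[1])
                if best is None or cand[0] < best[0]:
                    best = cand
        return best

    planks = tuple(sorted((l, i) for i, l in enumerate(wood_lengths)))
    result = search(planks, 0)
    return None if result is None else (result[0], list(result[1]))
```

For example, plan_cuts([10], [5, 5]) needs only one cut: cutting a 5 off the plank leaves an offcut of exactly 5 for the second requirement. Sorting the plank tuple in the memo key is what makes "same lengths regardless of order" hit the cache.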
# Macaulay Duration for coupon payment?

Sam buys an eight-year, 5000 par bond with an annual coupon rate of 5%, paid annually. The bond sells for 5000. Let $d_1$ be the Macaulay duration just before the first coupon is paid. Let $d_2$ be the Macaulay duration just after the first coupon is paid. Calculate $\dfrac{d_1}{d_2}$.

Solution: According to the SOA solutions, "This solution employs the fact that when a coupon bond sells at par the duration equals the present value of an annuity-due."

To be honest, I don't know why that is so. But suppose that a bond is not selling for par value; how can I solve for the duration of each coupon payment? Should I stick with the definition
$$D_{mac} = \dfrac{\sum_{t \in N} tv^tR_t}{\sum_{t \in N}v^tR_t}$$
where $N$ is the set of positive integers and $R_t$ is the payment at time $t$? I tried doing it for $d_0$ in the above problem, but I'm getting $0$ at $t=0$ using the definition. Any alternatives?

Selling at par means that $i=r$, where $i$ is the yield to maturity and $r$ is the coupon rate. The Macaulay duration can be written as (see here)
$$D=\frac{F(r(Ia)_{\overline{n}\rceil i}+nv^n)}{Fra_{\overline{n}\rceil i}+Fv^n}=\frac{r(Ia)_{\overline{n}\rceil i}+nv^n}{ra_{\overline{n}\rceil i}+v^n}=\frac{r(1+i){a}_{\overline{n}\rceil i}+(i-r)\,nv^n}{r+(i-r)v^n}$$
If $i=r$, we have
$$D=(1+i)a_{\overline{n}|i}=\ddot a_{\overline{n}|i}$$
So, in your question, we have
$$d_0=\ddot a_{\overline{\,8}|5\%},\quad d_1=d_0-1,\qquad d_2=\ddot a_{\overline{\,7}|5\%}$$
and
$$\frac{d_1}{d_2}=\frac{\ddot a_{\overline{\,8}|5\%}-1}{\ddot a_{\overline{\,7}|5\%}}=\frac{1}{1+5\%}\approx 0.9524$$
where we used the following:
$$\frac{\ddot a_{\overline{n}|i}-1}{\ddot a_{\overline{n-1}|i}}=\frac{\frac{1-v^n}{d}-1}{\frac{1-v^{n-1}}{d}}=\frac{1-v^n-d}{1-v^{n-1}}=\frac{v-v^n}{1-v^{n-1}}=\frac{v(1-v^{n-1})}{1-v^{n-1}}=v=\frac{1}{1+i}$$
that is,
$$\ddot a_{\overline{n}|i}=1+v\,\ddot a_{\overline{n-1}|i}$$
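The ratio can also be checked numerically with a throwaway sketch (annuity_due is my own helper name, computing $(1-v^n)/d$ with $d = 1-v$):

```python
i = 0.05
v = 1 / (1 + i)

def annuity_due(n):
    # present value of an n-payment annuity-due: (1 - v^n) / d, with d = 1 - v
    return (1 - v ** n) / (1 - v)

d1 = annuity_due(8) - 1   # Macaulay duration just before the first coupon
d2 = annuity_due(7)       # Macaulay duration just after the first coupon

print(d1 / d2)            # 0.9523809..., i.e. exactly 1 / 1.05
```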
# The Chimney Sweeper

[2] What is the speaker's tone in lines 1-2? In other words, what attitude toward the boy's plight does he seem to be showing?
## Sunday, March 04, 2012

### Selling your soul for a narrative: understanding the Gleick fraud

by Eric Dennis

The author holds a PhD in physics from UC Santa Barbara and is a Senior Fellow at the Center for Industrial Progress.

The Kavli Institute for Theoretical Physics (KITP) at UC Santa Barbara (Kohn Hall).

Within the green movement, Peter Gleick is a renowned environmental scientist specializing in the negative impact of global warming. He was elected to the National Academy of Sciences, co-founded the Pacific Institute, served as chairman of a "task force on scientific ethics and integrity" in the American Geophysical Union, and received a prestigious MacArthur Fellowship (aka "genius grant") in anticipation of exceptional achievement in his field of research.

Peter Gleick is also the self-admitted perpetrator of a recent fraud. This fraud did not involve any aspect of his own research, but was purely ideological in nature, directed against the Heartland Institute, a think tank that funds conferences featuring the work of scientists who do not toe the line on catastrophic global warming. Gleick impersonated a Heartland board member in order to obtain confidential documents, including the institute's donor list. He proceeded to combine this material with a fabricated strategy memo antagonistically mischaracterizing the institute's intentions, and to send the package anonymously to media organizations for the purpose of outing the donors and undermining future contributions. Only after being outed as the source of these documents by the detective work of a non-catastrophist blog contributor did Gleick fess up, thereby cementing his career self-sabotage.
He also claimed, implausibly, not to have himself fabricated the strategy memo, which he said he had received in the mail from another anonymous source, who for some reason trusted him and apparently him alone to disseminate it to the media, and whose writing style coincidentally mimics Gleick’s own stylistic idiosyncrasies.

This story has quickly gone viral in the climate blogosphere, but a less discussed aspect remains: why in the world would Gleick expose himself to such potentially career-destroying consequences, not to accomplish some Michael Mannian coup in the world of academic climatology, but merely to see his ideological foes embarrassed in print about a matter unrelated to any particular scientific controversy?

Clearly the answer involves Gleick’s own consuming belief in the righteousness of his cause. But particularly revealing is which aspect of his opponents’ case he sought to undermine. If he were convinced that the issue of primary importance was the battle over scientific proof of the coming catastrophe, he would not waste his fraud on what would then appear to be a minor tactical skirmish. But he just might risk it all to take down Heartland if what he saw as primarily important was constructing a moral narrative about his enemies’ motivation and financial backing.

In the multi-decadal climate battle, organizations like Heartland have only recently erected a forum for scientists who do not embrace catastrophism, so that they may present their results and bypass what had been a media blackout. The initial, hockey-stick phase of the catastrophists’ response, while making use of tangential slurs about their opponents’ purportedly corrupt motivation, consisted largely in attempts at scientific counter-argument.
Observing the ultimate failure of this approach, Gleick seems to have realized what the catastrophists’ real weapon is: the morality play of evil capitalists trashing the planet and hiring glib hacks to obfuscate the scientific evidence of what they’ve done. As long as this message is planted deep into the brain stems of the young, prickly details about erroneous principal component analysis in temperature reconstructions or missing feedback mechanisms in climate models can be smoothed over. Cobbling together plausible counter-arguments is tedious and subject to perpetual back-and-forth, while shaping the basic moral story by which people understand modern industrial capitalism and its relationship to human well-being—especially when their opponents offer no alternative to this story—is the rhetorical gift that keeps on giving.

Previous article by Eric Dennis:

You may be interested in other articles in the climate category (click).

#### snail feedback (9):

I would be surprised if he is a fellow at the Center for Industrial Progress. Doesn't look like his type of organisation at all.

I assure you that Eric Dennis is a senior fellow of the center (click to see the official page on the leadership). What has led you to your demonstrably wrong conclusion?

My apologies. I confused the author with Gleick. The anti-robot defenses are almost impossible to get past. Probably get a more intelligent comment from a robot than me.

Lubos, from everything I see and read, I've concluded the following:

-- the AGW alarmists are wimpy geeks who could never get a date or afford a nice car until Al Gore came along

-- the AGW realists (scientists and bloggers) are a collection of really smart people with elephant-sized bullshit detectors who know a scam when they see one. They're able to identify flaws in both the science and the logic behind the theory and destroy both in logical and scientific ways.
-- the defenders of the alarmists (Desmog as well as mainstream media) are not very bright and can't come up with either logical or scientific support for the geeks, so they end up just writing ad hominem criticisms of the critics, which makes them look even geekier and meaner.

Lemon from Toronto (as an aside, one of the comment moderation words below my comment was: "sensable")

All charitable foundations eventually end up in the hands of academia. MacArthur would be appalled. I didn't know him personally, but heard many stories while working as an actuarial consultant putting a value on the insurance company he founded.

The alarmists are in a panic about Heartland receiving big oil funding. So what? The CRU receives big oil funding.

Very good post on the subject, Eric. This is a religious phenomenon as far as I'm concerned. The kids in school are being indoctrinated into this religious movement and it makes them easy targets for political causes once they reach voting age. All the propaganda presented in schools about dying polar bears, dying seals, rising sea levels, on and on, lays the foundation for a moral and righteous cause, i.e. "love of mother earth" or "environmentalism". When the kids grow up and vote, politicians can bundle their proposed initiatives with a sprinkling of "polar bears" or "environment" and the voters equate the message with morality and "the right thing to do". It's imperative that we get honest and pure science back into the schools and remove all of the advocacy science. We need high-quality scientists and engineers tomorrow, not indoctrinated religious followers of faux environmentalism.

Very good post, Eric, which starts to get to the root of the motivation behind the fraud. I think it is also yet another sign that the warmists are starting to panic as the current lack of warming falsifies their conjecture. The resulting loss of public support is proving difficult for them to deal with, hence this and other examples of irrational behaviour.
## Monday, May 21, 2018 ### Maker faire 2018 I was at Maker Faire 2018 yesterday, and it is fun as usual! ## Sunday, May 6, 2018 ### Python debug in Jupyter notebook This week, we will talk about how to debug in the jupyter notebook. From this week, I will also try to use Python3 as much as possible in my code. You can find the notebook at Qingkai's Github Let's start by showing you some examples: ### Activate debugger after we run the code The first example is after we have an error occurred, for example, we have a function that squares the input number, but: def square_number(x): sq = x**2 sq += x return sq square_number('10') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-2-e429f5113bd2> in <module>() ----> 1 square_number('10') <ipython-input-1-822cf3bc7b4e> in square_number(x) 1 def square_number(x): 2 ----> 3 sq = x**2 4 sq += x 5 TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int' After we have this error, we could activate the debugger by using the magic command - %debug, which will open an interactive debugger for you. You can type in commands in the debugger to get useful information. %debug ipdb> h Documented commands (type help <topic>): ======================================== EOF cl disable interact next psource rv unt a clear display j p q s until alias commands down jump pdef quit source up args condition enable l pdoc r step w b cont exit list pfile restart tbreak whatis break continue h ll pinfo return u where bt d help longlist pinfo2 retval unalias c debug ignore n pp run undisplay Miscellaneous help topics: ========================== exec pdb ipdb> p x '10' ipdb> type(x) <class 'str'> ipdb> p locals() {'x': '10'} ipdb> q Here, the magic command '%debug' activate the interactive debugger pdb in the notebook, and you can type to see the value of the variables after the error occurs, like the ones I typed in above. 
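Under the hood, `%debug` essentially opens `pdb` on the last traceback (`pdb.post_mortem`). Outside the notebook you can reach the same information programmatically; the snippet below is my own sketch (not from the original post) that catches the error and inspects the failing frame without an interactive session:

```python
import sys
import traceback

def square_number(x):
    sq = x**2
    sq += x
    return sq

try:
    square_number('10')
except TypeError:
    _, exc, tb = sys.exc_info()
    # In an interactive session you could now call pdb.post_mortem(tb),
    # which is essentially what %debug does. Here we just read the frames.
    frames = traceback.extract_tb(tb)
    failing = frames[-1]
    print(failing.name, failing.lineno)  # the function and line that raised
    print(exc)
```

This prints that the error was raised inside `square_number`, together with the offending line number and the `TypeError` message.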
There are some most frequent commands you can type in the pdb, like: • n(ext) line and run this one • c(ontinue) running until next breakpoint • p(rint) print varibles • l(ist) where you are • 'Enter' Repeat the previous command • s(tep) Step into a subroutine • r(eturn) Return out of a subroutine • h(elp) h • q(uit) the debugger ### Activate debugger before run the code We could also turn on the debugger before we even run the code: %pdb on Automatic pdb calling has been turned ON square_number('10') --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-5-e429f5113bd2> in <module>() ----> 1 square_number('10') <ipython-input-1-822cf3bc7b4e> in square_number(x) 1 def square_number(x): 2 ----> 3 sq = x**2 4 sq += x 5 TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int' ipdb> p sq *** NameError: name 'sq' is not defined ipdb> p x '10' ipdb> c # let's turn off the debugger %pdb off Automatic pdb calling has been turned OFF Sometimes we want to add breakpoints to the code import pdb def square_number(x): sq = x**2 # we add a breakpoint here pdb.set_trace() sq += x return sq square_number(3) > <ipython-input-8-4d6192d84091>(8)square_number() -> sq += x (Pdb) l 3 sq = x**2 4 5 # we add a breakpoint here 6 pdb.set_trace() 7 8 -> sq += x 9 10 return sq [EOF] (Pdb) p x 3 (Pdb) p sq 9 (Pdb) c 12 ## Thursday, April 19, 2018 ### I gave my exit seminar today Today I had my exit seminar, which is equivalent to the PhD defense since at Berkeley, we don't have a defense like other places. I am one step forward to Dr. Kong now ^)^ It started with my advisor Richard to give an introduction of me, and I really enjoyed working with Richard in the last 7 years, I learned so much from him! Then my friend Kathryn and Felipe prepared a hilarious presentation to toast me, it showed my fun facts in the last 7 years. 
I then summarized my work at Berkeley into a 40 min talk, it is all about MyShake, the project I worked on during my PhD life here at Berkeley. 7 years work, when I looked back, I really accomplished a lot! I am really glad that my whole family came to my seminar, even though my parents are not English speakers, they always support me without any asking for returns! I am also really happy that my good colleague and friend Roman from DT also joined me to celebrate my graduation! I feel really lucky that I could come to Berkeley for a PhD, it is a life change event, that will change my life forever. I really appreciate all the people who helped me here and accompany me during the 7 years. How valuable it is for this great opportunity here. Now, I am really excited about the new life after my graduation (technically, I still need to submit my dissertation and get signatures from my committee members to graduate). At the same time, I am a little sad about finishing my PhD, it marks a milestone for me! I am so used to my role as a PhD student ^_^ I think there are a lot of interesting things to do in the future, just follow your heart and have fun in the work! ## Sunday, April 15, 2018 ### Python: Jenks Natural Breaks This week we will talk about the Jenks Natural Breaks, it is mostly useful for determining the map ranges. It finds the best way to split the data into ranges, for example, if we have 50 countries, 15 countries with 0 - 3 values, 20 countries with values from 5 - 10, and 15 countries with 15 - 20. Therefore, if we want to plot them on a map with different colors, the best way we are splitting the data is 0-3, 3-10, and 10-20. The Jenks Natural Breaks is an algorithm that will figure this out by grouping the similar values into a group. Let’s see the example below. I am using an existing package - jenkspy, to calculate the breaks. You can find the notebook on Qingkai’s Github. 
import jenkspy
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-poster')
%matplotlib inline

# Let's generate this 3-class data
data = np.concatenate([np.random.randint(0, 4, 15),
                       np.random.randint(5, 11, 20),
                       np.random.randint(15, 21, 15)])

# Let's calculate the breaks
breaks = jenkspy.jenks_breaks(data, nb_class=3)
breaks

[0.0, 3.0, 10.0, 20.0]

plt.figure(figsize=(10, 8))
hist = plt.hist(data, bins=20, align='left', color='g')
for b in breaks:
    plt.vlines(b, ymin=0, ymax=max(hist[0]))

We can see in the above figure that the breaks (black lines) are exactly what we expect!

## How does it work?

The method is an iterative process that repeatedly tests different breaks in the dataset to determine which set of breaks has the smallest in-class variance. You can see in the above figure that within each group/class, the variance is smallest. But note that this only minimizes the in-class variance; if we instead maximized the out-class variance (that is, the variance between different groups), the breaks would fall into the middle of the gaps in the above figure (in this case, roughly 4.5 and 12.5, but I didn't try).

## Another example

Let's have fun and see what the breaks look like for a normal distribution. (I didn't find the connection to 3 sigmas that I expected!)

np.random.seed(1)
normal = np.random.normal(loc=0.0, scale=1.0, size=500)
breaks = jenkspy.jenks_breaks(normal, nb_class=5)
breaks

[-2.79308500014654, -1.3057269225577375, -0.39675352685597737, 0.386539145133091, 1.2932258825322618, 3.0308571123720305]

plt.figure(figsize=(10, 8))
hist = plt.hist(normal, bins=20, align='left', color='g')
for b in breaks:
    plt.vlines(b, ymin=0, ymax=max(hist[0]))

## Monday, April 9, 2018

### Pandas groupby example

Pandas' groupby function is really useful and powerful in many ways. This week, I am going to show some examples of using this groupby function that I usually use in my analysis.
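To make the "smallest in-class variance" idea concrete, here is a tiny brute-force sketch of my own (not the dynamic-programming algorithm jenkspy actually uses): for three classes, try every pair of split points in the sorted data and keep the partition with the smallest within-class sum of squared deviations, reporting the breaks jenkspy-style as the data minimum followed by each class maximum:

```python
from itertools import combinations

def within_class_ss(groups):
    # Total sum of squared deviations of each value from its own group mean.
    total = 0.0
    for g in groups:
        mean = sum(g) / len(g)
        total += sum((x - mean) ** 2 for x in g)
    return total

def jenks_3class(data):
    # Brute force: every pair of cut positions (i, j) in the sorted data.
    data = sorted(data)
    n = len(data)
    best_ss, best_groups = float('inf'), None
    for i, j in combinations(range(1, n), 2):
        groups = [data[:i], data[i:j], data[j:]]
        ss = within_class_ss(groups)
        if ss < best_ss:
            best_ss, best_groups = ss, groups
    g1, g2, g3 = best_groups
    # Report jenkspy-style: [min, max of class 1, max of class 2, max of class 3]
    return [g1[0], g1[-1], g2[-1], g3[-1]]

# Three well-separated clusters -> breaks at the cluster boundaries.
print(jenks_3class([0, 0, 1, 10, 10, 11, 20, 20, 21]))  # [0, 1, 11, 21]
```

Brute force is O(n²) partitions here and only works for a fixed small class count, which is why real implementations use smarter bookkeeping, but the objective being minimized is the same.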
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('seaborn-poster')
%matplotlib inline

Let's first create a DataFrame that contains the name, score and round for some games.

a = ['Qingkai', 'Ironman', 'Batman', 'Qingkai', 'Ironman', 'Qingkai', 'Batman']
b = [3., 4, 2., 4, 5, 1, 2]
c = range(len(a))
d = [[x, y, z] for x, y, z in zip(a, b, c)]
df = pd.DataFrame(d, columns=['name', 'score', 'round'])
df

Now I want to calculate the mean scores for different users across all the games, and the standard deviations. It is quite simple with Pandas, as I am showing below:

df.groupby('name').mean()

name | score | round
Batman | 2.000000 | 4.000000
Ironman | 4.500000 | 2.500000
Qingkai | 2.666667 | 2.666667

df.groupby('name').std()

name | score | round
Batman | 0.000000 | 2.828427
Ironman | 0.707107 | 2.121320
Qingkai | 1.527525 | 2.516611

Or I can loop through the groupby object once to calculate them all.

for ix, grp in df.groupby('name'):
    print('Name: %s, mean: %.1f, std: %.1f' % (ix, grp['score'].mean(), grp['score'].std()))

Name: Batman, mean: 2.0, std: 0.0
Name: Ironman, mean: 4.5, std: 0.7
Name: Qingkai, mean: 2.7, std: 1.5

But we could also do it in a one-liner using the agg function:

df.groupby('name').agg({'score': ['mean', 'std'], 'round': 'count'})

name | score mean | score std | round count
Batman | 2.000000 | 0.000000 | 2
Ironman | 4.500000 | 0.707107 | 2
Qingkai | 2.666667 | 1.527525 | 3

Besides, you can also use customized functions in agg as well. For example, if we want to calculate the RMS value of the score, we could do the following:

def cal_RMS(x):
    return np.sqrt(sum(x**2 / len(x)))

df.groupby('name').agg({'score': ['mean', cal_RMS], 'round': 'count'})

name | score mean | score cal_RMS | round count
Batman | 2.000000 | 2.000000 | 2
Ironman | 4.500000 | 4.527693 | 2
Qingkai | 2.666667 | 2.943920 | 3
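One more pattern I find useful (my own addition, using the same toy DataFrame): `transform` returns a result aligned with the original rows, so group-level statistics can be broadcast back onto every row for per-row comparisons:

```python
import pandas as pd

a = ['Qingkai', 'Ironman', 'Batman', 'Qingkai', 'Ironman', 'Qingkai', 'Batman']
b = [3., 4, 2., 4, 5, 1, 2]
df = pd.DataFrame({'name': a, 'score': b, 'round': range(len(a))})

# Unlike agg (one row per group), transform keeps the original shape,
# so each player's mean score lands next to each of their own rows.
df['mean_score'] = df.groupby('name')['score'].transform('mean')
df['above_avg'] = df['score'] > df['mean_score']
print(df)
```

This is handy for flagging rows relative to their own group, such as finding every game in which a player beat their personal average.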
# [NTG-context] strange output in math display mode Alan BRASLAU alan.braslau at cea.fr Tue Dec 22 20:24:45 CET 2015 Wolfgang, Can you explain to us why it should be preferable for ConTeXt users to employ \frac12 rather than the native TeX construction {1\over 2}? I understand that the macro \frac does some additional trickery but the two constructions should *always* yield identical results (when keyed-in properly). Of course, Donald Knuth disagrees with \frac from the point of view of the aesthetics of the syntax. Alan On Mon, 21 Dec 2015 15:17:18 +0100 Wolfgang Schuster <schuster.wolfgang at gmail.com> wrote: > > However the command {1 \over 2} gives the correct result. > > > It's a spurious "\??mathstylecache" in math-ini.mkiv and I sent a fix > to the dev-list.
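For anyone who wants to compare the two constructions side by side, a minimal ConTeXt test file (my own sketch; the exact rendering can depend on the ConTeXt version in use) would be:

```tex
% Minimal ConTeXt MkIV test: \frac versus the plain TeX \over
\starttext
\startformula
  \frac{1}{2} \quad {1\over 2}
\stopformula
\stoptext
```

If the two halves of the formula render differently, that would illustrate the discrepancy discussed in this thread.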
## Algebra 2 (1st Edition) $0.5$ The normal distribution is symmetric about the mean, thus $P(x\leq \overline{x})=0.5$
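The symmetry claim can be checked numerically with Python's standard-library `NormalDist` (the mean and standard deviation below are an arbitrary choice of my own):

```python
from statistics import NormalDist

# For any normal distribution, P(X <= mean) = 0.5 by symmetry.
nd = NormalDist(mu=10, sigma=2)
print(nd.cdf(nd.mean))  # 0.5
```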
Pipelining increases ______ of the processor.

1. Throughput
2. Storage
3. Predictivity
4. Latency

Answer (Detailed Solution Below)

Option 1 : Throughput

Instruction Pipeline MCQ Question 1 Detailed Solution

Concept: Pipelining is an implementation technique whereby multiple instructions are overlapped in execution. It is like an assembly line.

Explanation: In pipelining, each step operates in parallel with other steps. It stores and executes instructions in an orderly manner. The main advantages of using a pipeline are:

• It increases the overall instruction throughput.
• The pipeline is divided into stages, and the stages are connected to form a pipe-like structure.
• We can execute multiple instructions simultaneously.
• It makes the system reliable.
• It increases the program speed.
• It reduces the overall execution time but does not reduce the individual instruction time.

A non-pipelined system takes 50 ns to process a task. The same task can be processed in a six-segment pipeline with a clock cycle of 10 ns. Determine approximately the speedup ratio of the pipeline for 500 tasks.

1. 6
2. 4.95
3. 5.7
4. 5.5

Option 2 : 4.95

Instruction Pipeline MCQ Question 2 Detailed Solution

Concept: The speedup factor is defined as the ratio of the time required for non-pipelined execution to the time required for pipelined execution.

Data: time for non-pipelined execution per task = tn = 50 ns; time per pipeline clock = tp = 10 ns; number of stages in the pipeline = k = 6; number of tasks = 500

Formula: $$S = \frac{{{T_n}}}{{{T_p}}}$$ where S is the speedup factor.

Calculation: Time for non-pipelined execution = Tn = 50 × 500. Time for pipelined execution = Tp = 1 × 6 × 10 + (500 - 1) × 10

$$S = \frac{{T_n}}{{T_p}} = 4.95$$

A five-stage pipeline has stage delays of 150, 120, 150, 160 and 140 nanoseconds.
The registers that are used between the pipeline stages have a delay of 5 nanoseconds each. The total time to execute 100 independent instructions on this pipeline, assuming there are no pipeline stalls, is ______ nanoseconds.

Instruction Pipeline MCQ Question 3 Detailed Solution

Data: Number of instructions = n = 100; Number of stages = k = 5; Register delay = 5 ns

Calculation: Cycle time of the five-stage pipeline = T = max(150, 120, 150, 160, 140) + register delay = 160 + 5 = 165 ns

The time required to execute n instructions with the pipeline = [k + (n – 1)]T = (5 + (100 - 1)) × 165 = 17160 ns

The speedup gained by an 'n' segment pipeline executing 'm' tasks is:

1. $$\frac {(n + m - 1)}{mn}$$
2. $$\frac {mn}{(n + m - 1)}$$
3. $$\frac {n+m}{(mn - 1)}$$
4. $$\frac {n + m}{(mn + 1)}$$

Answer (Detailed Solution Below)

Option 2 : $$\frac {mn}{(n + m - 1)}$$

Instruction Pipeline MCQ Question 4 Detailed Solution

Data: number of instructions/tasks = m; number of stages (segments) = n

Non-pipelined: Assume each stage takes 1 unit of time. Time taken (Twp) = number of stages × number of instructions, so Twp = n·m

Pipelined: Only the 1st instruction takes n units of time; then each of the remaining (m - 1) instructions takes 1 unit of time. Time taken (Tp) = (n + m - 1)

$$S = \frac{T_{wp}}{T_{p}} = \frac{nm}{n+m-1}$$

Therefore option 2 is correct.

Which one of the following is false about pipelining?

1. Increases the CPU instruction throughput
2. Reduces the execution time of an individual instruction
3. Increases the program speed
4. 1 and 2

Answer (Detailed Solution Below)

Option 2 : Reduces the execution time of an individual instruction

Instruction Pipeline MCQ Question 5 Detailed Solution

Concept: Pipelining is an implementation technique whereby multiple instructions are overlapped in execution. It is like an assembly line.

Explanation: In pipelining, each step operates in parallel with other steps. It stores and executes instructions in an orderly manner.
The main advantages of using pipeline are : • It increases the overall instruction throughput. • Pipeline is divided into stages and stages are connected to form a pipe-like structure. • We can execute multiple instructions simultaneously. • It makes the system reliable. • It increases the program speed. • It reduces the overall execution time but does not reduce the individual instruction time. ​Therefore option 2 is the false statement about Pipelining Consider a 6-stage instruction pipeline, where all stages are perfectly balanced. Assume that there is no cycle-time overhead of pipelining. When an application is executing on this 6-stage pipeline, the speedup achieved with respect to non-pipelined execution if 25% of the instructions incur 2 pipeline stall cycles is ________. Instruction Pipeline MCQ Question 6 Detailed Solution Data: Instruction pipeline = 6 stage 25% of the instructions incur 2 pipeline stall cycles Formula: Speed up = $$\frac{{Execution\;time\left( {non\;pipeline} \right)}}{{Execution\;time\;\left( {pipeline} \right)}}$$ Time with pipeline  =1+ stall freqency×stall cycle Calculation: Time without pipeline Time with pipeline Speed up = $$\frac{{6}}{{1.5}} = 4$$ Considering equal processing time for each segment, speed-up S achieved by a K-segment instruction pipeline operating on straight sequence of N instructions is given by: 1. $$S=\frac{K(N-K)}{K+N-1}$$ 2. $$S=\frac{K+N}{KN}$$ 3. $$S=\frac{K(K+1)}{(K+1)N}$$ 4. 
$$S=\frac{KN}{K+N-1}$$ Answer (Detailed Solution Below) Option 4 : $$S=\frac{KN}{K+N-1}$$ Instruction Pipeline MCQ Question 7 Detailed Solution Data: number of instructions = N number of stage (segment) = K Non-pipeline Assume each stage take 1 unit of time Time taken (Twp)  = number of stage × number of instructions Twp  = K.N For pipeline Only 1st instruction takes K unit time then every (N - 1) instruction takes 1 unit time Time taken(Tp) = (K + N - 1) $$S = \frac{T_{wp}}{T_{p}} = \frac{KN}{K+N-1}$$ Therefore option 4 is correct Consider an instruction pipeline with four stages (S1, S2, S3 and S4) each with combinational circuit only. The pipeline registers are required between each stage and at the end of the last stage. Delays for the stages and for the pipeline registers are as given in the figure.What is the approximate speed up of the pipeline in steady state under ideal conditions when compared to the corresponding non-pipeline implementation? 1. 4.0 2. 2.5 3. 1.1 4. 3.0 Option 2 : 2.5 Instruction Pipeline MCQ Question 8 Detailed Solution The correct answer is option 2 Concept: The segments are separated by registers Ri that holds the intermediate results between the stages. 
Data: Stage delay and corresponding register delay given: S1 = 5, S2 = 6, S3 = 11, S4 = 8, and the corresponding register delay is 1 ns for each stage. Number of stages = 4

Explanation: Time taken to execute N instructions in the non-pipelined implementation = (5 + 6 + 11 + 8)N = 30 × N

Clock period for the pipelined implementation = max(5, 6, 11, 8) + 1 = 12 ns

Time taken to execute N instructions in the pipelined implementation = (4 + N - 1) × 12 ≈ 12 × N (N is very large)

Speedup = $${30N\over12N }=2.5$$

On a non-pipelined sequential processor, a program segment, which is a part of the interrupt service routine, is given to transfer 500 bytes from an I/O device to memory.

Initialize the address register
Initialize the count to 500
LOOP: Load a byte from device
Store in memory at address given by address register
Increment the address register
Decrement the count
If count != 0 go to LOOP

Assume that each statement in this program is equivalent to a machine instruction which takes one clock cycle to execute if it is a non-load/store instruction. The load-store instructions take two clock cycles to execute. The designer of the system also has an alternate approach of using the DMA controller to implement the same transfer. The DMA controller requires 20 clock cycles for initialization and other overheads. Each DMA transfer cycle takes two clock cycles to transfer one byte of data from the device to the memory. What is the approximate speedup when the DMA controller based design is used in place of the interrupt-driven program based input-output?

1. 3.4
2. 4.4
3. 5.1
4. 6.7

Option 1 : 3.4

Instruction Pipeline MCQ Question 9 Detailed Solution

The correct answer is option 1.

Calculation: The interrupt-driven program requires $$(2+2+1+1+1) \times 500 \space cycles +1+1=3502 \space cycles$$

DMA clock cycles needed = $$20+2 \times 500$$ = 1020 cycles

Speedup = $$3502 \over 1020$$ ≈ 3.43

Hence the correct answer is 3.4.
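The timing model behind the pipeline questions above is the same throughout: a non-pipelined run costs n times the per-task latency, while a k-stage pipeline costs (k + n - 1) clock cycles. A small sketch of my own, re-checking two of the worked answers:

```python
def pipeline_speedup(n_tasks, k_stages, t_serial, t_clock):
    # Non-pipelined: every task pays the full serial latency.
    t_nonpipe = n_tasks * t_serial
    # Pipelined: the first task fills the pipe (k cycles),
    # then one result completes per clock.
    t_pipe = (k_stages + n_tasks - 1) * t_clock
    return t_nonpipe / t_pipe

# Question 2: 50 ns per task, 6-segment pipeline, 10 ns clock, 500 tasks.
print(round(pipeline_speedup(500, 6, 50, 10), 2))  # 4.95

# Question 3: 5 stages, slowest stage 160 ns plus 5 ns register, 100 instructions.
print((5 + 100 - 1) * (160 + 5))  # 17160 ns
```

Both results match the detailed solutions, which is a quick way to verify answers of this type during practice.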
Consider an instruction pipeline with five stages without any branch prediction: Fetch Instruction (FI), Decode Instruction (DI), Fetch Operand (FO), Execute Instruction (EI) and Write Operand (WO). The stage delays for FI, DI, FO, EI and WO are 5 ns, 7 ns, 10 ns, 8 ns and 6 ns, respectively. There are intermediate storage buffers after each stage and the delay of each buffer is 1 ns. A program consisting of 12 instructions I1, I2, I3, …, I12 is executed in this pipelined processor.Instruction I4 is the only branch instruction and its branch target is I9. If the branch is taken during the execution of this program, the time (in ns) needed to complete the program is 1. 132 2. 165 3. 176 4. 328 Option 2 : 165 Instruction Pipeline MCQ Question 10 Detailed Solution The correct answer is "option 2". CONCEPT: A pipeline is a process where multiple instructions get overlapped during execution. The pipeline is divided into stages and these stages are connected with one another to improve the execution process The pipeline allows storing and executing instructions in an orderly process. In the instruction pipeline, a stream of instructions is executed by overlapping phases of the instruction cycle like fetch, decode, execute, etc. It reads an instruction from memory while previous instruction being executed in others segments of the pipeline. EXPLANATION: T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14 T15 I1 FI DI FO EI WO I2 FI DI FO EI WO I3 FI DI FO EI WO I4 FI DI FO EI WO I5 Stall I6 stall I7 stall I9 FI DI FO EI WO I10 FI DI FO EI WO I11 FI DI FO EI WO I12 FI DI FO EI WO Cycle time = max of all stages delay + buffer delay = max (5ns,7ns,10ns,8ns,6ns) + 1ns = 10ns + 1ns = 11ns Out of all the instructions from I1 to I12, I4 is the only branch instruction. When I4 takes branch, the control will jump to the target instruction I9. There will be 3 stalls in stages DI, FO & EI. So after 3 stalls, I9 will start its execution as it is a branch target. 
So, the total number of clock cycles = 15. Since 1 clock cycle time = 11 ns, the total time to complete the program = 15 × 11 ns = 165 ns. Hence, the correct answer is "option 2".

Which of the following statements with respect to K-segment pipelining are true?

(A) The maximum speedup that a pipeline can provide is k, theoretically.
(B) It is impossible to achieve the maximum speedup k in a k-segment pipeline.
(C) All segments in the pipeline take the same time in computation.

Choose the correct answer from the options given below:

1. (A) and (B) only
2. (B) and (C) only
3. (A) and (C) only
4. (A), (B) and (C)

Answer (Detailed Solution Below)

Option 1 : (A) and (B) only

Instruction Pipeline MCQ Question 11 Detailed Solution

The correct answer is option 1.

Key Points Pipelining: Overlapping execution (parallelism) improves performance. The aim of using pipelining is to approach a single cycle per instruction (CPI), which automatically improves performance.

Statement A is true: $$S = {{t_n} \over t_p}$$. If we assume that the time it takes to process a task is the same in the non-pipelined and pipelined circuits, we will have tn = k·tp. Including this assumption, the speedup reduces to $$S = {{k\,t_p} \over t_p} = k$$. This shows that the theoretical maximum speedup that a pipeline can provide is k, where k is the number of segments in the pipeline.

Statement B is true: k × efficiency = speedup. The speedup cannot equal k until the efficiency becomes 1, and in a real environment the efficiency cannot become 1 for many reasons, like the delay at the intermediate buffers and the different times taken by different segments to perform their sub-operations, which cause all other segments to waste time while waiting for the next clock.

Statement C is false: different segments can take different times to complete their sub-operations.

Hence the correct answer is (A) and (B) only.

Consider the following processor (ns stands for nanoseconds).
Assume that the pipeline registers have zero latency.

P1: Four-stage pipeline with stage latencies 1 ns, 2 ns, 2 ns, 1 ns.
P2: Four-stage pipeline with stage latencies 1 ns, 1.5 ns, 1.5 ns, 1.5 ns.
P3: Five-stage pipeline with stage latencies 0.5 ns, 1 ns, 1 ns, 0.6 ns, 1 ns.
P4: Five-stage pipeline with stage latencies 0.5 ns, 0.5 ns, 1 ns, 1 ns, 1.1 ns.

Which processor has the highest peak clock frequency?

1. P1
2. P2
3. P3
4. P4

Option 3 : P3

Instruction Pipeline MCQ Question 12 Detailed Solution

Cycle time of P1 = max(1 ns, 2 ns, 2 ns, 1 ns) = 2 ns ∴ Frequency of P1 = $$\frac{1}{{2\;ns}} = 0.5\;GHz$$

Cycle time of P2 = max(1 ns, 1.5 ns, 1.5 ns, 1.5 ns) = 1.5 ns ∴ Frequency of P2 = $$\frac{1}{{1.5\;ns}} = 0.67\;GHz$$

Cycle time of P3 = max(0.5 ns, 1 ns, 1 ns, 0.6 ns, 1 ns) = 1 ns ∴ Frequency of P3 = $$\frac{1}{{1\;ns}} = 1\;GHz$$

Cycle time of P4 = max(0.5 ns, 0.5 ns, 1 ns, 1 ns, 1.1 ns) = 1.1 ns ∴ Frequency of P4 = $$\frac{1}{{1.1\;ns}} = 0.91\;GHz$$

∴ The frequency of P3 is the highest.

A certain processor deploys a single-level cache. The cache block size is 8 words and the word size is 4 bytes. The memory system uses a 60-MHz clock. To service a cache miss, the memory controller first takes 1 cycle to accept the starting address of the block; it then takes 3 cycles to fetch all the eight words of the block, and finally transmits the words of the requested block at the rate of 1 word per cycle. The maximum bandwidth for the memory system when the program running on the processor issues a series of read operations is ________ × 10^6 bytes/sec.
Instruction Pipeline MCQ Question 13 Detailed Solution Data: 1 word = 4 bytes Cache block size: 8 word Frequency = 60 MHz clock $$1\;cycle = \frac{1}{{60\;M\;Hz}} = \frac{{{{10}^{ - 6}}}}{{60}}\;s$$ In case of a miss, To accept the starting address = 1 cycle To fetch all the eight words of the block = 3 cycle To transmit the words of the requested block = 1 cycle/word Formula: $$Bandwidth = \frac{{data\;tranfer\;}}{{Total\;cycles\;}}$$ Calculation: In case of a cache miss, 8 words need to be transmitted ∴ Total cycles = 1 + 3 + 1× 8 = 12 cycle Data transfer for a block = 8 word = 8 × 4 bytes = 32 bytes $$Bandwidth = \frac{{32\;byte}}{{12\;cycle\;}}$$ $$Bandwidth = \frac{{32\;byte\;}}{{\begin{array}{*{20}{c}} {12 × \frac{{{{10}^{ - 6}}}}{{60}}\;}\\ \; \end{array}sec\;}}$$ Bandwidth = 160 × 106 bytes/sec Important points: 1 M = 106 SI unit of Hz: s-1 A 5-stage pipelined processor has Instruction Fetch (IF), Instruction Decode (ID), Operand Fetch (OF), Perform Operation (PO) and Write Operand (WO) stages. The IF, ID, OF and WO stages take 1 clock cycle each for any instruction. The PO stage takes 1 clock cycle for ADD and SUB instructions, 3 clock cycles for MUL instruction, and 6 clock cycles for DIV instruction respectively. Operand forwarding is used in the pipeline. What is the number of clock cycles needed to execute the following sequence of instructions? Instruction Meaning of instruction I0 : MUL R2, R0, R1 R2← R0 * R1 I1 : DIV R5, R3, R4 R5 ← R3 / R4 I2 : ADD R2, R5, R2 R2 ← R5 + R2 I3 : SUB R5, R2, R6 R5 ← R2 – R6 1. 13 2. 15 3. 17 4. 19 Option 2 : 15 Instruction Pipeline MCQ Question 14 Detailed Solution Key Points It is given that there is operand forwarding. In the case of operand forwarding, the updated value from the previous instruction's PO stage is forwarded to the present instruction's PO stage. Here there's a RAW dependency between I1-I2 for R5 and between I2-I3 for R2. 
These dependencies are resolved by using operand forwarding as shown in the below table. The total number of clock cycles needed is 15.

CLK IF ID OF PO WO
1 I0
2 I1 I0
3 I2 I1 I0
4 I3 I2 I1 I0
5 I3 I2 I1 I0
6 I3 I2 I1 I0
7 I3 I2 I1 I0
8 I3 I2 I1
9 I3 I2 I1
10 I3 I2 I1
11 I3 I2 I1
12 I3 I2 I1
13 I3 I2 I1
14 I3 I2
15 I3

Clock cycles = 15. Hence the correct answer is 15.

Consider a pipelined processor with 5 stages, Instruction Fetch (IF), Instruction Decode (ID), Execute (EX), Memory Access (MEM), and Write Back (WB). Each stage of the pipeline, except the EX stage, takes one cycle. Assume that the ID stage merely decodes the instruction and the register read is performed in the EX stage. The EX stage takes one cycle for ADD instruction and two cycles for MUL instruction. Ignore pipeline register latencies.

Consider the following sequence of 8 instructions: ADD, MUL, ADD, MUL, ADD, MUL, ADD, MUL

Assume that every MUL instruction is data-dependent on the ADD instruction just before it and every ADD instruction (except the first ADD) is data-dependent on the MUL instruction just before it. The speedup is defined as follows: $$Speedup = \frac{{Execution{\:}time{\:}without{\:}operand{\:}forwarding}}{{Execution{\:}time{\:}with{\:}operand{\:}forwarding}}$$ The Speedup achieved in executing the given instruction sequence on the pipelined processor (rounded to 2 decimal places) is _______

Instruction Pipeline MCQ Question 15 Detailed Solution

Answer: 1.87 to 1.88

Explanation: Arrows represent dependencies.

Without Operand Forwarding:

WB I1 I2 I3 I4 I5 I6 I7 I8
MEM I1 I2 I3 I4 I5 I6 I7 I8
EX I1 I2 I2 I3 I4 I4 I5 I6 I6 I7 I8 I8
ID I1 I2 - - I3 - - - I4 - - I5 - - - I6 - - I7 - - - I8 - -
IF I1 I2 I3 - - I4 I5 I6 I7 I8

Total 30 clock cycles are required.
With operand forwarding, each producer's EX-stage result is forwarded directly to the dependent instruction's EX stage, so only short stalls remain and the sequence takes a total of 16 clock cycles.

$$Speedup = \frac{{Execution{\:}time{\:}without{\:}operand{\:}forwarding}}{{Execution{\:}time{\:}with{\:}operand{\:}forwarding}} = \frac{{30}}{{16}} = 1.875$$

Match List-I with List-II and select the correct answer using the code given below the lists:

List-I:
a. Pipelined ALU
b. Simpler compiler
c. Separate data and instruction caches
d. Lesser cycles per instruction

List-II:
i. RISC
ii. CISC
iii. Mixed RISC-CISC

1. a - iii, b - ii, c - iii, d - i
2. a - i, b - ii, c - iii, d - iii
3. a - iii, b - iii, c - ii, d - i
4. a - iii, b - iii, c - iii, d - i

Answer (Detailed Solution Below) Option 1 : a - iii, b - ii, c - iii, d - i

Instruction Pipeline MCQ Question 16 Detailed Solution

Concept:

RISC (Reduced Instruction Set Computing) | CISC (Complex Instruction Set Computing)
More registers required | Fewer registers compared to RISC
Fewer addressing modes | More addressing modes
Fewer cycles per instruction | More cycles per instruction
Fixed instruction size | Variable instruction size
Hardwired control unit | Microprogrammed control unit
Fewer instructions required | More instructions required

Note: A pipelined ALU and separate data and instruction caches are present in both, i.e. in RISC and CISC.

Which of the following instruction processing activities of the CPU can be pipelined?
1. Instruction encoding
2. Operand loading
3. Operand storing

1. 1 and 2 only
2. 2 and 3 only
3. 1 and 3 only
4. 1, 2 and 3

Answer (Detailed Solution Below) Option 2 : 2 and 3 only

Instruction Pipeline MCQ Question 17 Detailed Solution

Concept: A pipeline is an arrangement of the hardware elements of the CPU such that its overall performance is increased.
Simultaneous execution of more than one instruction takes place in a pipelined processor.

Pipeline stages:
1) Instruction fetch
2) Instruction decode
3) Instruction execute
4) Memory access
5) Write back

Execution time in the pipeline: $$E_{pipeline} = (K + n - 1)\,T_{clk}$$ where K is the number of segments (stages) and n is the number of tasks.

Explanation: Operand loading and operand storing are independent operations, hence these two can be pipelined.
1. Encoding of an instruction must include the opcode, operands, and address information.
2. Encoding represents the entire instruction as a single binary value, so it cannot be pipelined.
Option (b) is the correct choice.

Fill in the blank in the context of software system architecture: In __________ architecture, the processing of the data in a system is organized so that each processing component is discrete and carries out one type of data transformation. The data flows from one component to another for processing.

1. Pipe and filter
2. Repository
3. Layered
4. Client-Server

Answer (Detailed Solution Below) Option 1 : Pipe and filter

Instruction Pipeline MCQ Question 18 Detailed Solution

The processing of the data in a system is organized so that each processing component (filter) is discrete and carries out one type of data transformation. The data flows (as in a pipe) from one component to another for processing. This style is commonly used in data processing applications (both batch- and transaction-based) where inputs are processed in separate stages to generate related outputs. It is easy to understand and supports transformation reuse. The workflow style matches the structure of many business processes, and evolution by adding transformations is straightforward. It can be implemented as either a sequential or a concurrent system.

To load a byte of data in parallel into a shift register with a synchronous load, there must be ______

1. One clock pulse
2. One clock pulse for each 1 in the data
3. Eight clock pulses
4.
One clock pulse for each 0 in the data

Answer (Detailed Solution Below) Option 1 : One clock pulse

Instruction Pipeline MCQ Question 19 Detailed Solution

Concept:
To load a byte of data in parallel into a shift register with a synchronous load, only one clock pulse is needed.
A sequential device loads the data present on its inputs and then shifts it to its output once every clock cycle.
A shift register that supports parallel loading can take in several bits on a single clock pulse.
So option 1 is the correct answer.

An instruction pipeline has five stages, namely, instruction fetch (IF), instruction decode and register fetch (ID/RF), instruction execution (EX), memory access (MEM), and register writeback (WB), with stage latencies 1 ns, 2.2 ns, 2 ns, 1 ns, and 0.75 ns, respectively (ns stands for nanoseconds). To gain in terms of frequency, the designers have decided to split the ID/RF stage into three stages (ID, RF1, RF2), each of latency 2.2/3 ns. Also, the EX stage is split into two stages (EX1, EX2), each of latency 1 ns. The new design has a total of eight pipeline stages. A program has 20% branch instructions, which execute in the EX stage and produce the next instruction pointer at the end of the EX stage in the old design and at the end of the EX2 stage in the new design. The IF stage stalls after fetching a branch instruction until the next instruction pointer is computed. All instructions other than the branch instruction have an average CPI of one in both designs. The execution times of this program on the old and the new design are P and Q nanoseconds, respectively. The value of P/Q is __________.
Instruction Pipeline MCQ Question 20 Detailed Solution

Concept:
Execution time = cycle time × (branch instruction fraction × branch CPI + non-branch instruction fraction × CPI)
CPI (clocks per instruction) = number of stalls + 1

Explanation:

Cycle time:
Old design cycle time = Max(1 ns, 2.2 ns, 2 ns, 1 ns, 0.75 ns) = 2.2 ns
New design cycle time = Max(1 ns, 0.733 ns, 0.733 ns, 0.733 ns, 1 ns, 1 ns, 1 ns, 0.75 ns) = 1 ns

CPI:
Non-branch instruction CPI = 1
Branch instruction CPI in the old design: number of stalls = 2, so CPI = 3
Branch instruction CPI in the new design: number of stalls = 5, so CPI = 6

Instructions:
Branch instructions $$= \frac{{20}}{{100}} = \frac{1}{5}$$, non-branch instructions $$= 1 - \frac{1}{5} = \frac{4}{5}$$

Calculation:
Execution time of the old design: $$P = 2.2{\rm{\;ns}}\left( {{\rm{\;}}\frac{1}{5} \times 3 + \frac{4}{5} \times 1} \right) = \frac{7}{5} \times 2.2{\rm{\;ns}}$$
Execution time of the new design: $$Q = 1{\rm{\;ns}}\left( {{\rm{\;}}\frac{1}{5} \times 6 + \frac{4}{5} \times 1} \right) = \frac{{10}}{5} \times 1{\rm{\;ns}} = 2{\rm{\;ns}}$$
$$\frac{{\rm{P}}}{{\rm{Q}}} = \frac{{2.2\; \times \;7}}{{5\; \times \;2}} = \frac{{15.4}}{{10}} = 1.54$$
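The Question 20 arithmetic can be reproduced with a short script. This is only an illustrative sketch of the formula above (cycle time × weighted CPI); the function name and structure are ours, not part of the original solution.

```python
# Illustrative helper for the formula above: average time per instruction
# = cycle_time * (branch_fraction * branch_CPI + non_branch_fraction * 1),
# with branch_CPI = stalls + 1. All names are ours, chosen for readability.
def avg_time_per_instruction(cycle_time_ns, branch_fraction, branch_stalls):
    branch_cpi = branch_stalls + 1
    non_branch_fraction = 1 - branch_fraction
    return cycle_time_ns * (branch_fraction * branch_cpi + non_branch_fraction * 1)

P = avg_time_per_instruction(2.2, 0.2, 2)  # old design: 2 stall cycles per branch
Q = avg_time_per_instruction(1.0, 0.2, 5)  # new design: 5 stall cycles per branch
print(round(P, 2), round(Q, 2), round(P / Q, 2))  # 3.08 2.0 1.54
```

Changing `branch_fraction` or the stall counts shows how sensitive the speedup is to the branch penalty of the deeper pipeline.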
# Ideal Gas Equation and Absolute Temperature

The notion of absolute temperature is suggested by observations of the behaviour of gases with change in temperature. In particular, the volume of a fixed mass of gas at constant pressure increases linearly with temperature t as

V = A (t - To) ------(i)

The significant thing about this is that To is independent of the gas and depends only on the temperature scale chosen. For the Celsius scale, To = -273.15 °C. This suggests that we define a new temperature T given by

T = t + 273.15 -------(ii)

so that equation (i) takes the simple form V = AT. This is known as Charles' law. We have a similar relation for pressure versus temperature at constant volume (P ∝ T). These relations, together with Boyle's law, PV = const. (at constant temperature), are contained, as we know, in the ideal gas equation PV = μRT. Real gases obey these relations only approximately, the approximation being better at low pressures and high temperatures. Charles' law suggests a physical interpretation of the temperature T introduced above. As T → 0, the volume of an ideal gas goes to zero. T = 0 thus represents the absolute lowest temperature possible, since below T = 0, V would be negative, which is unphysical. For this reason T is known as absolute temperature. Of course, since an ideal gas is a hypothetical entity and real gases do not follow Charles' law for all T, this is only a suggestive argument for the existence of an absolute zero of temperature, at which every substance in nature has the least possible molecular activity. The unit of the absolute temperature scale has the same size as the unit of the Celsius scale: the two scales differ only in their origin. The absolute temperature T in equation (ii) is called the Kelvin temperature, denoted by K. The absolute zero of temperature is thus 0 K or -273.15 °C. We should note that the Kelvin scale is not the only absolute temperature scale possible.
We can have different absolute temperature scales by choosing different sizes of the unit of the scale. The zero of all such absolute scales will, however, be the same, namely the lowest possible temperature in nature. We mentioned earlier that two fixed points are needed to define a temperature scale. In modern thermometry, the triple point of water is chosen to be one of the fixed points. The triple point of water is the state where the solid, liquid and vapour phases of water co-exist in equilibrium. It is characterized by a unique temperature and pressure. This is why it is preferred over the conventional fixed points (the melting point of ice and the boiling point of water), which depend on pressure. To construct the absolute temperature scale, we take the triple point of water as one fixed point. On this scale, the absolute zero may be regarded as the other fixed point! We need to assign numbers to the temperatures of these two fixed points. The lowest temperature, the absolute zero, has T = 0. What value of absolute temperature shall we assign to the triple point of water? The value that we choose will determine the size of the unit of the absolute temperature scale. The triple point of water on the Celsius scale is 0.01 °C. On the Kelvin scale, the absolute temperature that we assign to the triple point is 273.16 K. To summarize, the Celsius temperature t is related to the absolute temperature on the Kelvin scale (T) by t = T - 273.15; T = 273.16 K (triple point of water). The next task is to see how to determine the temperature T for different bodies in a way that is as independent of the particulars of the measuring device as possible. Heat energy causes changes in the temperature, volume and pressure of gases.

Boyle's Law: Sir Robert Boyle stated that at constant temperature, the pressure of a given mass of gas is inversely proportional to its volume.
Let P be the pressure and V the volume of a gas at a constant temperature T; then P ∝ 1/V (at constant temperature), or PV = a constant.
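As a quick numerical illustration of the relations above, Charles' law with T in kelvin predicts how a gas expands at constant pressure. The sample values below are invented for the example; they are not data from the passage.

```python
# Charles' law check with temperatures converted to kelvin via T = t + 273.15.
# The gas sample (1.0 L at 0 degrees C, warmed to 100 degrees C at constant
# pressure) is an assumed example, not data from the passage.
def celsius_to_kelvin(t_celsius):
    return t_celsius + 273.15

V1, T1 = 1.0, celsius_to_kelvin(0.0)   # initial volume (L) and temperature (K)
T2 = celsius_to_kelvin(100.0)
V2 = V1 * T2 / T1                      # V/T is constant at constant pressure
print(round(V2, 4))  # 1.3661
```

Note that using Celsius temperatures directly in the ratio would give a nonsensical answer (division by zero here), which is exactly why the absolute scale is the natural one for the gas laws.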
Volume 414 - 41st International Conference on High Energy Physics (ICHEP2022) - Poster Session

D-meson average production analysis as a function of multiplicity in pp collisions at $\sqrt{s}$ = 13 TeV with ALICE at the LHC

M. Giacalone

Pre-published on: October 24, 2022

Abstract
Production measurements of heavy-flavour hadrons in pp collisions provide a stringent test of pQCD calculations. Analysing their production as a function of charged-particle multiplicity allows us to study multi-parton interactions, which are expected to play a relevant role in charged-particle production at high energy at the LHC. Moreover, the comparison with theoretical models allows investigating the contribution of colour reconnection to the hadronization mechanisms of heavy quarks into hadrons. In this report, the production measurements in pp collisions at $\sqrt{s}$ = 13 TeV of D mesons (D$^0$, D$^+$, D*$^+$), as well as their dependence on multiplicity, are presented. The results are compared with similar studies for other particle species, with D-meson measurements performed at $\sqrt{s}$ = 7 TeV, and with predictions from Monte Carlo event generators.

DOI: https://doi.org/10.22323/1.414.0983
# Tag Info

**4** The Lagrangian formalism treats $x$ and $\dot{x}$ as independent variables. In particular, you cannot write $\frac{\mathrm{d}}{\mathrm{d}t}x$ because $x$ is not dependent on time. What is dependent on time is a particular trajectory $x(t)$ that is the solution to the equations of motion (the Euler-Lagrange equations). Prior to solving the equations of ...

**4** Multiple classical solutions to Euler-Lagrange equations with pertinent/well-posed boundary conditions (such solutions are sometimes called instantons) are a common phenomenon in physics, cf. e.g. this related Phys.SE post and links therein. In optics, it is well known that already e.g. two mirrors can create multiple classical paths.

**4** Actually, the extra path is not irrelevant. If you put a light bulb at A and a $4\pi$ detector (this means $4\pi$ steradian coverage, i.e. it detects incoming light in any direction) at B, the detector will see light along both paths: direct, and bounced off the mirror, which is exactly the result you got from Fermat's principle. If you want to exclude the ...

**3** If you know the propagator, i.e. $\langle x'|e^{itH}|x\rangle$, then you could differentiate with respect to time at $t=0$ to get $\langle x'|H|x\rangle$. From this we have, using the resolution of the identity, $H|x\rangle=\int_{-\infty}^\infty dx'\, |x'\rangle\langle x'|H|x\rangle$, from which we have $V(x)|x\rangle=\int_{-\infty}^\infty \,...$

**2** Here we assume that OP's question asks about $\phi^4$-theory in 1+1D, where the Lagrangian density reads $$\tag{1} {\cal L}~=~\frac{1}{2}\dot\phi^2 -{\cal U}, \qquad {\cal U}~:=~ \frac{1}{2} \phi^{\prime 2} + {\cal V},\qquad \phi \in C^1(\mathbb{R}^2),$$ where the $\phi^4$-potential density $$\tag{2} {\cal V}(\phi)~\propto~(\phi^2-v^2)^2~ \geq~ 0$$ ...
**2** The violation of gauge invariance by this term is the "only" reason why it's never written down – as long as we define the word "only" to include all other reasons that may be shown to be "physically equivalent" to gauge symmetry. Gauge symmetry is extremely important and its violation would make a similar theory inconsistent, especially at the quantum ...

**2** You have a few different questions here, so let's try to go through them one by one. When we make the chiral symmetry local, have we introduced a gauge symmetry, or some analogue of a gauge symmetry? When you make the chiral symmetry local you introduce a gauge symmetry. The terms "gauge symmetry" and "local symmetry" are two different ways of saying the ...

**2** You are confusing two definitions - closed system and conservation of energy. I'll clear them up for you. In classical dynamics a closed system is one where no force external to the system acts. In a closed system, the total energy, total momentum and total angular momentum must be conserved. This follows from Noether's theorem. If a has no interaction with ...

**2** If one considers a dynamic system, which (from left to right) consists of a spring with constant $k_1$, a mass $m$, a damper with constant $c$ and the other spring with constant $k_2$, all connected together, respectively. If I understand your setup correctly, the damper is connected between the mass and the 2nd spring. Denote the extension of spring 1 with ...

**2** The propagators themselves are not indicative of the form of the Lagrangian. They only provide information regarding the nature of the field - e.g. scalar / fermion / vector boson, etc. (gravity metric?). Things that allude to what the Lagrangian looks like are vertices / interactions. As a simple example: if you have a theory of a field $\phi$ with a 4-prong ...
**2** The definition of the (integral of the) functional derivative (at least a definition that's good enough for physics-level rigor) is the difference of the functional evaluated on a path $x(t)$ plus an arbitrary variation $\epsilon(t)$ and the functional evaluated on the path, to leading order in $\epsilon$. In other words ...

**2** Draw an arrow to represent a vector, with its length representing the vector magnitude. Draw a coordinate system and get the components of the vector. Now draw another coordinate basis, rotated with respect to the first, and get the components with respect to the new basis. The length of the arrow is the same in both systems - i.e. length is invariant - ...

**1** Here is one way to imagine a variation. You have a path $\vec y(t)=(y_1(t), ..., y_n(t)),$ where $y_i(t) = x_i(q_1(t),q_2(t),\dots,q_{n-k}(t),t).$ So the idea is that for every time $t$ you have some $q$s and they (together with the $t$) give you all your $y_i$, hence give you your $\vec y.$ Seems basic, but the idea is that given a $t$ you have an $\vec y.$ ...

**1** Comments to the question (v3): Note that the gamma matrices are covariantly conserved$^1$ $$\nabla_{\mu}\gamma^c~=~\omega_{\mu}{}^c{}_b\gamma^b+\frac{1}{4}\omega_{\mu}{}^{ab}[\gamma_{ab},\gamma^c] ~=~0,\qquad \nabla_{\mu}\gamma^{\nu}~=~0,\tag{1}$$ cf. e.g. Ref. 1. Consider the vector current $$J^{\mu}~:=~\bar{\psi}\gamma^{\mu}\psi,\tag{2}$$ where $\psi$ ...

**1** The thing is that when you write the Lagrangian, you don't know the particle's trajectory yet. If you had a specified function $x(t)$, then of course $\dot{x}$ is not independent. But if you only know the particle's position at a given time, its velocity can be anything, because you are free to set position and velocity as initial conditions how you ...
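The point made in the first and last answers, that the Lagrangian treats $x$ and $\dot x$ as independent slots, can be made concrete with the standard textbook derivation. This is our own illustrative sketch for $L(x, \dot x) = \tfrac{1}{2}m\dot x^2 - V(x)$:

```latex
% Partial derivatives of L(x, \dot x) treat x and \dot x as independent;
% time dependence enters only after a trajectory x(t) is substituted.
\frac{\partial L}{\partial x} = -V'(x), \qquad
\frac{\partial L}{\partial \dot x} = m \dot x,
\qquad\Longrightarrow\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial \dot x}
  - \frac{\partial L}{\partial x} = m\ddot x + V'(x) = 0 .
```

Only in the last step, after the Euler-Lagrange equation is imposed along a trajectory $x(t)$, does the total time derivative $\mathrm{d}/\mathrm{d}t$ appear.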
Proof of Kepler's First Law. Does a roller coaster have an engine? Does it need an engine? Explain. When the train goes over such a hill, it, and its riders, briefly undergo free fall. A 2 kg rock is dropped from a height of 20 m. Instead of the simulation, we had to design a poster and put in the group roles, the parabolas' equations, and 3 real-life examples of parabolas seen in everyday places. Create a unique parabola in the pattern f(x) = (x − a)(x − b). The vertex is (2, 8); this parabola has a maximum because it opens downward, so the maximum value is 8. Newer roller coasters are designed by computer, and the designer can throw in all sorts of twists and whistles in order to provide an exciting ride. Notice that the parabolas are symmetric about the y-axis. The silver miners' stocks have had a roller-coaster ride of a year, getting sucked into March's stock panic before skyrocketing out in a massive upleg. The size of the hill, the loop diameter and the material the roller coaster is made of are all important factors that determine the thrill of the ride. Roller coaster enthusiasts exploit these differences and compare notes as to the better places to sit on each ride to maximize the hang times, g-forces and ride experiences. This is why we measure the forces exerted by coasters in units of G's, where 1 G represents the force the rider experiences while sitting stationary in the earth's gravitational field. You decide to connect these two straight stretches y = L1(x) and y = L2(x) with part of a parabola y = f(x) = ax² + bx + c, where x and f(x) are measured in feet.
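The "connect two straight stretches with a parabola" exercise above can be sketched numerically. The slopes of L1 and L2 and the transition points are assumed values chosen for illustration; the text does not specify them.

```python
# Join an ascent line L1 (assumed slope 0.8) to a drop line L2 (assumed slope
# -1.6) with f(x) = a*x**2 + b*x + c, matching slopes at the assumed
# transition points x0 = 0 and x1 = 100 so the track has no kinks.
m1, m2 = 0.8, -1.6        # assumed slopes of L1 and L2
x0, x1 = 0.0, 100.0       # assumed transition points

b = m1                    # f'(x0) = 2*a*x0 + b = m1, with x0 = 0
a = (m2 - b) / (2 * x1)   # f'(x1) = 2*a*x1 + b = m2
c = 0.0                   # choose f(x0) = 0 so the parabola starts on L1

def f(x):
    return a * x**2 + b * x + c

def f_prime(x):
    return 2 * a * x + b

print(round(a, 5), b, round(f_prime(x1), 2))  # -0.012 0.8 -1.6
```

Matching the slopes (not just the heights) at both transition points is what keeps the ride smooth; a mismatch would produce a sudden jolt at the joins.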
My students will build catapults to experience parabolas in the real world. Before I put pieces together, I need a name/theme for the ride. The coaster's 310-foot first hill gave it the title of the "World's Tallest Full-Circuit Roller Coaster" from its opening in May 2000 until Steel Dragon 2000. Design the track just the right shape, a parabola, and the car will stay in contact with the track but exert zero normal force. 2) Parabola Card Sort: in this activity, students are given graphs of parabolas. [Figure 1 of "Roller coasters without differential equations": the track curve r(t) in x, y, z with the unit vectors e_N, e_T, e_B.] Also consider curves that look like parabolas and curves that are parabolas (assume the curves are smooth). The points chosen for the simulation are $$A=(0, 0)$$ and $$B=(3, -2)$$. In this WebQuest, students will create a thrilling, safe roller coaster using linear and quadratic equations. Design a Roller Coaster (an example): suppose we are asked to design a simple ascent-and-drop roller coaster. Roller Coaster Ride: a roller coaster at XTRA Fun Rides goes into an underground tunnel during the first descent of the ride. Weightlessness is truly a unique feeling. Among her many flight experiences she flew the KC-135 in a roller-coaster pattern of parabolas to create 25-second periods of zero gravity. Its first hill is 207 feet (63 m) with a 60 degree drop, and it reaches speeds of up to 74 mph (119 km/h). Air resistance affects a roller coaster while it is in the air, providing a force that counters the movement of the cart. Students in Pre-AP Algebra II used polynomial parabolas formed by factoring to create 3D-printed roller coasters. The "maxima" of a roller coaster must get progressively lower.
The figure shows the roller-coaster free-body diagram. An individual lap bar and seatbelt secure each rider across the thighs and pelvis. 317 mile), which would take two minutes for the 24-passenger train to reach. The silver miners' stocks have had a roller-coaster ride of a year, getting sucked into March's stock panic before skyrocketing out in a massive upleg. Goldfinches fly upward at an angle, dive downward in parabolic free fall, then flap back up again, then repeat. Roller coasters involve many aspects of math, calculus being one of them. But as you can see there, the actual formula for the cubic parabola is the one I quoted. This should help in following along with the discussions live and in rec. Additionally, all objects in free fall accelerate due to gravity at the same rate. Real-world parabolas: Nighthawk was built in 2004. In roller coasters, in addition to factors that speed the train up, there are factors that slow it down. Real-life examples: why a conic section? Because a parabola can be formed by slicing a right circular cone. Definitions of roller-coaster terms. On-Board Sierra Tonante: Sierra Tonante is a wooden roller coaster (Fig.
16, A in a parabolic shape above a level surface between two poles such that its height above the surface is given by the equation Y = -001x. iRubric F3B4C8: *th grade students will design a functional roller coaster using cardboard. This is Millennium Force's third hill, a 182-foot-tall parabolic arc that gives riders a wonderful dose of airtime. Step 1: Choose whether you would like to construct a pattern for a marching band or a roller coaster. Similar to The Racer, Diamondback is a roller coaster filled with parabolic hills. elliptic, parabolic, or hyperbolic orbits). Ferrari World's Formula Rossa, the fastest, literally takes riders' breath away at speeds of up to. Those objects are real-life applications of the concept of a parabola. For ambitious thrill-seekers looking to conquer Canada's tallest and fastest roller coaster, look no further: Leviathan, named for a mystical sea creature, will take you on the ride of your life. Students apply their knowledge of parabola characteristics to a roller coaster example. The duration of the intervals between parabolas can be arranged prior to the flight so as to give investigators enough time to change an experimental set-up. The company may have purchased 4,760 feet of steel, but Mako is so much more than that. The 737 MAX also rides like a roller coaster, according to the data logs. Give the domain of the function defined below. Saddle up and take the ride of your life in 2018. Steel Vengeance is the world's first and only "hyper-hybrid" record-breaking roller coaster. Slow the car down and it will try to take a different parabolic path.
The roller coaster is 40 feet above ground one second after the initial drop. The 0-g phases were followed by a 2-g pull-out phase and, after a very brief 1-g phase, by the next 2-g pull-up phase (g = 10 m/s²). It is expected that the rolling body then stops 92.53 inches away from the initial dropping point. This measurement will be made from the analysis of a video file of. We chatted with Jeff Havlik, a roller coaster design expert with renowned design firm PGAV Destinations, who has designed such coasters as the Manta at SeaWorld Orlando. Kingda Ka, the tallest roller coaster on Earth, drops its passengers a life-flashing 418 feet. The basic principles of roller coaster mechanics have been known since 1665, and since then roller coasters have become a popular diversion. Consider a fountain. Thunder Coaster has a character of its own, a great selection of elements and perhaps the most outrageous dose of airtime safely possible on a roller coaster. Between six and sixteen passengers will be allowed inside an enclosed aircraft. Nicknamed the "vomit comet", the airplane, like a roller coaster, climbs into the sky, floats momentarily at the top and then slides down the other side until it reaches the same altitude where it began. One of their first joint projects was the first German steel roller coaster, Super Acht, which had its pre. The site is a highly anticipated addition to the resort that should receive a significant degree of usage because of its very unique geograp.
In addition to the high-visibility color innovation, the TI-84 Plus CE calculator's other key features include being 30% lighter and thinner than earlier models. Next, the plane is "pushed over" the top to begin the zero-gravity segment of the parabolas. The equation that best fits the upper rail is: Every point on the parabola is equidistant from the focus and the directrix. The computations in this proof are based on work from Celestial Mechanics by Harry Pollard (The Carus. 4) Below each of your 5 (or more) curves, write its function. f(x) = 2x² − 8x + 8. The height of a section of a roller-coaster ride is given by the equation h = for 0 < x < 3. Part B: The second part of the new coaster is a parabola.
Mark the mid-point of the coaster on each frame as you did in the projectile motion lab and then save your data with a unique. The world's largest aircraft for weightless research, the 'Zero-G' Airbus A300, took its 61st and last trip for ESA recently. It's a class favorite. Greg is in a car at the top of a roller-coaster ride. The domain of a function is the complete set of possible values of the independent variable. WELCOME TO THE WORLD OF PARABOLA Friday, February 27, 2015. Two parabolas connect to make the McDonald's M. A roller coaster begins at height 0 with an initial speed of forty meters per second. The first piece is stopped in midair by a specially contrived explosion in such a way that its subsequent trajectory is straight down to the ground. These manoeuvres can be flown either consecutively in a roller-coaster fashion or separated by intervals of several minutes. Alton Towers in Staffordshire has just opened the world's first 14-loop roller coaster, called The Smiler, while in Abu Dhabi, Ferrari World claims to have the world's fastest coaster. Here's a quick look at the long struggle to find them. A roller coaster is an elevated railway with steep inclines and descents that carries a train of passengers through sharp curves and sudden changes of speed and direction for a brief thrill ride.
As the winter progresses, the park is set to wrap up construction of the new coaster and start landscaping along with the fine details of theming. "When you launch an object out of a catapult, it follows a parabolic path," she explains. The safety inspector notes that Ray also needs to plan for a vertical ladder through the center of the coaster's parabolic shape for access to the coaster to perform safety repairs. But roller coaster thrills don't happen by accident, and the people who design every loop, bank, inversion, and drop spend literally half a decade crafting your experience. 5 into the formula, you get that h = 100 feet. Ray needs help creating the second part of the coaster. Good roller coaster names? For my calculus class, I am making a roller coaster made out of segments obtained from parametric equations. The parabola has many important applications, from the design of automobile headlight reflectors to calculating the paths of ballistic missiles. The first drop is 215′ at 89 degrees, nearly straight down. Use a linear function to describe the first part of the roller coaster, a quartic to describe the. Roller coasters offer an inherently interesting way to study energy transformation in a system.
Roller coasters are fun and increase your adrenaline rush; you can even download PSD paper roller-coaster templates in various designs. The Golden Guide calls it "roller-coaster flight," and the phrase is now common in my family. In 2010, it was designed and made into a scale model by Julijonas Urbonas, a PhD candidate at the Royal College of Art in London. Another ride is placed next to Greg's roller coaster. A wooden coaster is a very hard thing to pull off well, especially a modern one. One example is a 205-foot-tall high-speed steel roller coaster. It is a steel coaster located in Canada's Wonderland in Vaughan, Ontario, Canada. The aircraft pulls up to an angle of 60° at an altitude of 18,000 ft. The zeros are basically the x-intercepts. The dive drop is currently found on only four B&M Wing Coasters: The Swarm at Thorpe Park, X-Flight at Six Flags Great America, GateKeeper at Cedar Point, and Flug der Dämonen at Heide Park. Nighthawk was built in 2004. BRC Imagination Arts, a Southern California design firm, has a zero-gravity roller-coaster proposal that is waiting for a US$50 million investment. Guests must have a minimum of three functioning extremities. Below is the equation for y = −x². When on a roller coaster, it feels like you're defeating gravity; thereafter, the speed of the roller coaster satisfies conservation of energy. Roller-coaster designers build for safety, but they also make sure the design still brings joy to passengers while delivering a scare. 3) Then tape or glue one end of the poster paper to the other end. Finally, something very similar to a parabola, known as projectile motion, often occurs naturally. Raging Bull is a steel hyper coaster located in Southwest Territory at Six Flags Great America.
We are to design the parabolic part between those two points (points P and Q), represented by the curve f. I need some reasons why a roller coaster is a parabola besides its shape and its ascent and descent. People of all ages ride roller coasters, but only a few notice that most roller coasters take the form of a parabola. This is why we measure the forces exerted by coasters in units of G's, where 1 G represents the force the rider experiences while sitting stationary in the earth's gravitational field. Below you will find a graph comparing the horizontal and vertical distances of a portion of the roller-coaster track, with key points labeled; use a heavy dot to show the transition points. A parabola (note: the axes are not to scale). The cubic parabola is a simplification of the clothoid, obtained by keeping only the first term of the series for X and Y. The figure shows the roller-coaster free-body diagram at the given point. You will end up with a three-dimensional representation of your roller coaster. What would be the perfect roller coaster? One that does all those different turns, or one that goes faster than any ride is allowed to go? A ride with a bunch of turns would definitely make a roller coaster exciting. Instead of the simulation, we had to design a poster and include the group roles, the parabolas' equations, and three real-life examples of parabolas seen in everyday places.
The planners decide that the following section, for 3 < x < 5, should take the shape of a parabola, with the height given by the equation h = g(x) = kx² + 16x − 28. School of Mathematics and Statistics, UNSW Sydney NSW 2052. Telephone +61 2 9385 7111. UNSW CRICOS Provider Code: 00098G. ABN: 57 195 873 179. Authorised by the Head of School, School of Mathematics and Statistics. Have you ever thought about how a roller coaster applies concepts from physics? On a roller coaster, passengers ride in a vehicle that has no engine of its own, and gravity provides an acceleration of 9.8 m/s² downward. At Six Flags Magic Mountain in California, we can observe a parabola on the ride "Goliath" as the track builds up to prepare for a long, thrilling drop: the roller coaster rises in a curved, upside-down "U" shape. The points chosen for the simulation are $$A=(0, 0)$$ and $$B=(3, -2)$$. Thunder Coaster has a character of its own, a great selection of elements, and perhaps the most outrageous dose of airtime safely possible on a roller coaster. Some parabolas open sideways and have equations of the form x = ay² + by + c or x = a(y − h)² + k. ROLLER COASTER POLYNOMIALS application problems: Fred, Elena, Michael, and Diane enjoy roller coasters. Every point on a parabola is equidistant from the focus and the directrix.
Any parabola can be repositioned and rescaled to fit exactly onto any other parabola; that is, all parabolas are geometrically similar. When copies of this exact same coaster opened at Kings Island and Kings Dominion, it was largely seen as a unique if middling family coaster. All five coasters were built over November and December 2009, as students learned and applied the concepts of slope and intercept. The arch for the roller coaster also represents a parabola. Find out about the contribution of math to roller coasters with help from an expert. We had a change in the assignments. The pull-up is followed by a free-fall ballistic trajectory lasting 45 s of roughly 0 g. LaMarcus Adna Thompson obtained a patent regarding roller coasters on January 20, 1885; his coasters were made of wood, but this patent is considerably later than the "Russian mountains" described in the article. You decide that the slope of the ascent should be 0.5. Much like a roller coaster, the aircraft has gut-wrenching effects that have earned it the nickname the "Vomit Comet." This specific roller coaster is The Behemoth. Timberliner wooden-coaster trains are designed to be used with standard wooden track, so it is possible to use Timberliners on rides both new and old. Projectile motion: a ball is shot into the air from the edge of a building, 50 feet above the ground.
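One way to make the transition from the straight ascent to the parabolic crest smooth is to match both height and slope where the two pieces join. A sketch under illustrative assumptions (join at the origin, ascent slope 0.5 as above, crest 20 ft further along; the exact worksheet values may differ):

```python
def crest_parabola(slope, peak_dx):
    """Coefficients (a, b, c) of f(x) = a*x^2 + b*x + c that meets a straight
    ascent of the given slope at the origin (f(0) = 0, f'(0) = slope) and
    crests peak_dx feet later (f'(peak_dx) = 0)."""
    b = slope                    # slope match at x = 0
    a = -slope / (2 * peak_dx)   # from f'(peak_dx) = 2*a*peak_dx + b = 0
    c = 0.0                      # height match at x = 0
    return a, b, c

a, b, c = crest_parabola(0.5, 20.0)
peak_height = a * 20.0**2 + b * 20.0 + c
print(a, b, c, peak_height)  # -0.0125 0.5 0.0 5.0
```

Matching the slope (not just the height) at the join is what keeps riders from feeling a kink in the track.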
NASA provides the experience of microgravity by flying its astronauts in a huge airplane; the plot of height against time is parabolic. Calculating the t value, you find that the vertex occurs where t = 1.5. The students were charged with the question below to begin their project-based learning assignment. The roller coaster is 40 feet above ground one second after the initial drop. In "Roller coasters without differential equations," Figure 1 shows the trajectory r(t) together with its tangent, normal, and binormal unit vectors e_T, e_N, e_B. Geometry demonstrations such as "Eight Circles in a Parabola" explore related configurations. Step 3: label the parabolas/quadratic functions displayed in your design. There are parabolas in real life, for example in tourist spots, such as this picture of a roller coaster in the shape of a parabola. "A physicist would say that the seat was exerting a force on you; they'd call it a normal force," Pope says. They then figure out the equation of the curve.
Pictured is the green alligator (mini roller coaster) giving some children an exhilarating ride. It was found in her room, and it makes a parabola that opens up. According to this model, which of the following represents the lowest height the coaster reaches? Make sure that you understand how Newton's laws work and how they relate to a real-life roller coaster. Below is a list of coaster terms used by enthusiasts when discussing their favorite subject. This steel coaster took 21st place in the Golden Ticket Awards for best steel roller coaster in 2010. "Two Circles in a Parabola" is another such geometry demonstration. Parabolas are also used when designing roller coasters: because the points that form the track are the same distance from the focus, a designer can create an arc that is concave down. For f(x) = 2x² − 8x + 8, which equals 2(x − 2)², the vertex is (2, 0); because the parabola opens up, the vertex is a minimum, not a maximum. This simulation lets students choose from five track configurations or create their own design, then watch the resulting motion. This roller coaster was designed by Werner Stengel and built by the manufacturer Intamin. The normal vector is directed toward the centre of the local circle of curvature. A roller-coaster rider is in an accelerated reference frame, which, by the Principle of Equivalence, is physically equivalent to a gravitational field. Below is the equation for y = −x². Since the top of a roller coaster resembles a parabola, your first suggestion is to find a parabola y = ax² + bx + c that will connect the two points. Parabolas are quadratic functions. The video starts with an explanation followed by a test run (around the 1:50 mark).
f(x) = 2x² − 8x + 8. The height of a section of a roller-coaster ride is given by a quadratic equation for 0 < x < 3. It is expected the rolling body then stops about 92.53 inches from the initial dropping point. On the energy diagram, the minimum of the potential-energy curve corresponds to the point of maximum kinetic energy, while the maximum height of the cart is where potential energy is greatest. By changing the angle and location of the intersection of a plane with a cone, we can produce the different types of conics; orbits, likewise, may be elliptic, parabolic, or hyperbolic. This Demonstration maps the motion of an object with a mass of 10 kg that splits into two pieces at the peak of its flight. A Roller Coaster Approach to Satellite Re-positioning (November 2019, El Segundo, Calif.). The advent of steel coasters did not displace the beloved wooden coasters, or "woodies," which were also instrumental in the industry's growth. For this parabolic-flight campaign, that involved a pre-Test Readiness Review (TRR) on Friday, June 5th, followed by the actual TRR the following Monday (as well as loading the plane and installing hardware), four consecutive flight days (Tuesday through Friday), and unloading the aircraft on Friday, with a backup day on Saturday. As most students explore mathematics, parabolas are the first type of graph that deviates from the world of straight lines. Whenever a new roller coaster opens near their town, they try to be among the first to ride it. You can make free roller-coaster templates if you plan to open an amusement park or want a roller coaster for your child's birthday. Measurements for a Parabolic Dish. Architects and builders use parabolas to shape the bridges, roller coasters, and arches we saw. Part B: the second part of the new coaster is a parabola. Galileo had demonstrated this. Weightlessness is felt when the normal force drops to zero; for a moment the rider is effectively in free fall.
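For f(x) = 2x² − 8x + 8 above, completing the square gives 2(x − 2)², so the vertex sits at (2, 0) and, since the parabola opens up (a > 0), it is a minimum. A quick check using the standard vertex formula x = −b / (2a):

```python
def vertex(a, b, c):
    """Vertex (x, y) of the parabola y = a*x^2 + b*x + c."""
    x = -b / (2 * a)
    return x, a * x**2 + b * x + c

print(vertex(2, -8, 8))  # (2.0, 0.0); a > 0, so this is a minimum
```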
When we are dealing with equations like y = (x + 5)², the h-value in the formula looks like 5, but on a graph the vertex's x-coordinate is actually −5: writing the equation as (x − (−5))² makes this explicit. Forty years later, Werner Stengel started working on roller coasters together with Anton Schwarzkopf. A quadratic is graphed as a parabola, but you rarely see a whole roller coaster shaped like a single parabola. These contact forces allow the marble to stay on the track. Show trails of the projectile, and turn on simple air friction. Suppose you are asked to design the first ascent and drop for a new roller coaster. Any sound waves entering a parabolic dish parallel to the axis of symmetry and hitting the inner surface of the dish are reflected back to the focus. For example, if friction is ignored, the speed of a roller coaster at any point along the track is determined by how far it has fallen from rest at the top of the coaster's highest hill. From drug dosages to financial investments, launching satellites to servicing roller coasters, and predicting populations and their growing needs, researchers and developers rely on a set of trusted equations.
2) Parabola Card Sort: in this activity, students are given graphs of parabolas. Students adjust the scale of the graph to match the dimensions of the real-life parabola. Note: whether the parabola faces upward or downward depends on the sign of a. Its unceasing speed, as it hurtles over 14 hills and crisscrosses the immense wooden structure a dizzying ten times, has caused even the most grizzled thrill seekers to pray they'll roll back into the station in one piece. By studying the photographs of your favorite coasters, you decide to make the slope of the ascent 0.5. (If you examine the video, you will see that the coaster is segmented into six cars and the mid-point is between two segments.) It's a hypercoaster, designed for "airtime." A projectile is an entity thrown into the air or into space. One simple way to model the motion of a roller coaster is to assume there is no friction and that nothing rolls. A hyperbola is a symmetrical open curve formed by the intersection of a cone with a plane at a smaller angle to its axis than the side of the cone makes. The track can only push on the wheels of the car; it cannot pull.
A roller coaster goes over a hill of radius 30 m. Graphing quadratic inequalities: a quadratic inequality of the form y > ax² + bx + c (or with <, ≥, or ≤ in place of >) represents a region of the plane bounded by a parabola. You decide to connect the two straight stretches y = L1(x) and y = L2(x) with part of a parabola y = f(x) = ax² + bx + c, where x and f(x) are measured in feet. This WebQuest is designed for 10th-12th grade high-school mathematics students who are learning how to create quadratic equations by viewing parabolic graphs. Because of these two properties of physics, on certain parabolic hills a roller coaster can be considered to be in free fall (think of the roller coaster as being "thrown" up the parabola; on what path will it fall?), and the rider falls at the same rate as the car. When going to an amusement park, many people do not observe the different structures of roller coasters. The company may have purchased 4,760 feet of steel, but Mako is so much more than that: SeaWorld Orlando's new Mako roller coaster is the area's fastest, longest, and tallest coaster. As part of their internship, they get to assist in the planning of a brand-new roller coaster. In this case, the train may literally not be touching the track at all. Under what conditions does the parabola open up versus open down?
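At the top of a circular hill, riders feel weightless when the centripetal acceleration v²/r equals g, because at that speed the track no longer needs to push up on the car. A small sketch for the 30 m hill mentioned above:

```python
import math

G = 9.8  # m/s^2

def weightless_speed(radius, g=G):
    """Speed at the top of a circular hill at which the normal force vanishes,
    i.e. the centripetal acceleration v^2 / r equals g."""
    return math.sqrt(g * radius)

print(f"{weightless_speed(30):.2f} m/s")  # ~17.15 m/s over a 30 m hill
```

Any faster than this over the same hill and the car would leave the track unless it is held down by upstop wheels.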
The Johnson Space Center, for example, operates a low-g research aircraft that flies in a path shaped like roller-coaster hills, which we call parabolas. 4) For the parabola that must have an equation with a positive a-value (positive leading coefficient), draw a bird in the sky above it. The first parabola has the equation y = −x²; the second shows the parabola shifted to the right, with equation y = −(x − h)². Describe the direction of the parabola and determine the y-intercept and zeros. 3) For both parabolas, draw a flagpole where each axis of symmetry meets the roller coaster. Parabolas are suggested because the standard-form equation allows you to place the vertex at the desired height and translate it easily. "A good roller coaster is not about a single gimmick, such as one steep drop," said Kris Rowberry, who has followed the industry closely for 15 years, visited over 40 theme parks, and ridden over 200 roller coasters. Roller-coaster patents are also found on non-U.S. registers. Introduction: slowly, the train rolls out from the station, down a very gentle slope, turning right before the train reaches the chain of the 40° lift hill (Fig. 1). Most people already know that roller coasters can be found in science, but math can be found in roller coasters as well: the contribution of math involves the speeds and angles of the tracks and the cars.
The sun's rays reflect off the parabolic mirror toward an object attached to the igniter. Your job is to find many different kinds of uses for parabolas. The coaster reaches a height of 510 metres (1,670 ft). The more energy a roller coaster has at the beginning of the ride, the more successful the ride. The inside of the A300 Zero-G looks mostly like a normal airplane, except that the middle section has white foam mats on the floor, ceiling, and walls. The figure shows the computed velocities along the first five parabolas that define the roller coaster's path. A 725 kg roller-coaster car is at the top of a roller coaster, as shown below. You can model this parabola using GeoGebra. Several coasters share the name Vortex: one at Calaway Park, a stand-up roller coaster at California's Great America in Santa Clara, California, and a suspended roller coaster at Canada's Wonderland in Vaughan, Ontario, Canada. And you, frankly, probably wouldn't be expected to solve a problem like this in most first-year physics classes. A large parabolic antenna was installed nearby at Mount Kempis. 2) Roller coasters that arc up and down and sometimes around: the one ride I avoid!
When a coaster falls from the peak (vertex) of the parabola and air resistance is neglected, all bodies fall at the same rate. The year after Cedar Fair took over ownership, the park received its third coaster, Rails, a small steel coaster that previously operated as Wild Cat at Cedar Point. As it takes two to three years for a roller coaster to be planned and built, the decision to invest in Mako had to have been made in the recent aftermath of Blackfish. B&M's "in-line" inversion element can be found on their sit-down and stand-up roller coasters. This roller coaster was built at Cedar Point, Ohio, in 2003. Kings Island's ride lineup is historic, legendary, and impressively intimidating. One Saturday, the four friends decide to ride a new coaster. "Machine" is a term sometimes used in reference to a roller coaster. The zeros are basically the x-intercepts. Parabolas in Architecture and Engineering. She then patiently tries to sweep the cobwebs from the math corners of my brain.
At point B, the car has a speed of 25 m/s, which is increasing at the rate of 3 m/s². 1) Write a piecewise-defined function f that describes the roller coaster's transition from the inclined line to the parabola to the declined line. For ease of calculation, the points where the horizontal part meets the coaster do not need to be differentiable. During the park's early years in the 1970s, the then-record-breaking Racer was born (and was subsequently recognized as the start of the Second Golden Age of Roller Coasters). Its initial velocity is 20 feet per second. My favorite roller coaster with a circular loop is the Carolina Cobra, found at Kings Dominion. First of all, roller-coaster designers have to know the maximum velocity of a train without it going off the track. The duration of the intervals between parabolas is determined prior to the flight, to give the investigators enough time to change the experimental set-up. Vekoma built and designed this roller coaster; in 2008, the Borg Assimilator was renamed Nighthawk. Notice that the parabolas are symmetric about the y-axis.
The computations in this proof are based on work from Celestial Mechanics by Harry Pollard (The Carus Mathematical Monographs). Saddle up and take the ride of your life in 2018. Millennium Force is the second-fastest ride at Cedar Point, clocking in at 93 mph. Solving −16t² + 48t + 64 = 0, you factor to get −16(t − 4)(t + 1) = 0. I have seen problems where a track can be written as a sine curve, but not as a parabola. Part A: the first part of Ray and Kelsey's roller coaster is a curved pattern that can be represented by a polynomial function. This coaster was built in the year 2000 and is made of steel. The new coaster (the world's tallest, longest, and fastest dive coaster) is set to debut this spring. The Gateway Arch is known worldwide as the symbol of the city of St. Louis. Designing roller coasters is a Jekyll-and-Hyde job: the first priority is to make riders safe; the second is to thrill them. Definitions of terms used for roller coasters.
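The factoring above can be verified numerically: h(t) = −16t² + 48t + 64 has roots t = 4 and t = −1, and its vertex at t = 1.5 gives the maximum height of 100 feet mentioned earlier in the document.

```python
def h(t):
    """Height in feet t seconds after launch."""
    return -16 * t**2 + 48 * t + 64

# Roots from the factored form -16*(t - 4)*(t + 1) = 0:
assert h(4) == 0 and h(-1) == 0

# Vertex at t = -b / (2a):
t_vertex = -48 / (2 * -16)
print(t_vertex, h(t_vertex))  # 1.5 100.0
```

Only the root t = 4 is physically meaningful here, since t = −1 lies before the launch.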
The ascent has a slope of 0.5 with a horizontal displacement of 20 ft. This sequence was flown for 10 parabolas; then a 1-G horizontal flight period was inserted. We are given a starting slope (line L1, the ascent) along which the company would like this roller coaster to be built. Since gravity always provides a 1 G downward force, weightlessness at the crest requires the centripetal acceleration to equal 1 g, so that the normal force drops to zero. Roller coasters are considered by many to be the most exciting ride in any amusement park. Used by the space agency to train astronauts, the plane, often called the Vomit Comet, helps scientists simulate space conditions by performing 30-40 parabolic plunges, each creating about 20-25 seconds of microgravity. If you want to build a parabolic dish where the focus is 200 mm above the surface, what measurements do you need? To make it easy to build, let's have it pointing upwards, so we choose the x² = 4ay equation, where a is the distance between the focus and the vertex. We want "a" to be 200, so the equation becomes x² = 4ay = 4 × 200 × y = 800y. 2) You will then cut out the roller coaster so that the top of it is your track.
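Given x² = 800y for the dish with its focus 200 mm above the vertex, the depth at any rim radius follows directly from y = x² / (4a). A sketch (the 300 mm rim radius is an illustrative choice, not from the source):

```python
def dish_depth(radius_mm, focal_mm=200.0):
    """Depth y of a parabolic dish x^2 = 4*a*y at a given rim radius,
    where a = focal_mm is the focus-to-vertex distance."""
    return radius_mm**2 / (4 * focal_mm)

print(dish_depth(300))  # 112.5 mm deep for a dish 600 mm across
```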
From there, the track continues for 500 metres (1,600 ft). The track is supported by a steel frame arranged for the purpose. In order to make the ride smooth, a section of track following the graph of the function y = a(x − 40)² is added as a transition from the parabola to the line. The coaster enters the tunnel three seconds after the first descent starts and stays in the tunnel for a total of 2 seconds. A math connection is made to parabolas. Riders will experience weightlessness and rapid movements from side to side. Applications of the Parabola. What started as a fun way for the theme parks to squeeze a bit of extra cash out of the customers. The Gravity Group is committed to keeping wooden roller coasters a permanent part of the thrill landscape. The objects all have curved lines. 2: Definitions of roller-coaster terms, M-R.
The only force here is gravity. A body is forced to move along a given trajectory. From Wikipedia: However, the parabola is too narrow. It is a steel coaster located in Canada's Wonderland in Vaughan, Ontario, Canada. I learned that most of the spine-tingling stunts roller coasters perform are driven solely by gravity, potential energy, and kinetic energy. From pint-sized starters like Woodstock Express to the 120-mph Top Thrill Dragster - if you love roller coasters, this is the only place you need to be. Parabolas are a set of points in one plane that form a U-shaped curve, but the application of this curve is not restricted to the world of mathematics. It is expected the rolling body then stops at 92. In the interval (-inf, 2) the graph of f is a parabola shifted up 1 unit. We had everything done and ready. The coaster's 310-foot first hill gave it the title of the "World's Tallest Full-Circuit Roller Coaster" from its opening in May 2000 until Steel Dragon 2000. This activity requires students to show they understand key quadratic characteristics: minimums, maximums, axis of symmetry, a-values, and finding the vertex from an equation. Its initial velocity is 20 feet per second. Kings Island's ride lineup is historic, legendary, and impressively intimidating. All aspects of the roller coaster (the marble to the track, the cardboard to the base) are pushing on each other with equal but opposite forces. Roller Coasters, "technical & school aspects": in this blog we will present research and technical aspects of roller coasters, recommend some of the best in the world, and investigate heights, speeds, etc. Projectile motion is one kind of motion.
The first piece is stopped in midair by a specially contrived explosion in such a way that its subsequent trajectory is straight down to the ground. For a short time the position of a roller-coaster car along its path is defined by the equations r = 25 m, θ = (0.3t) rad, and z = (−8 cos θ) m, where t is measured in seconds. Determine the magnitudes of the car's velocity and acceleration when t = 4 s. The computations in this proof are based on work from Celestial Mechanics by Harry Pollard (The Carus Mathematical Monographs). 2: Definitions of Roller-Coaster terms: C-D. An 850 kg roller-coaster is released from rest at Point A of the track shown in the figure. Specifically, at Six Flags Magic Mountain in California, we can observe a parabola on the ride "Goliath" as the tracks start to build up to prepare for a long, thrilling drop. All 5 coasters were built over November-December 2009, as students learned and applied the concepts of slope and intercept. The 2016 Patrick County Agricultural Fair was successful, with a good turnout each evening and fun for kids. Kingda Ka, the tallest roller coaster on Earth, drops its passengers a life-flashing 418 feet. This principle is used in lithotripsy, a medical procedure for treating kidney stones. This gigantic thrill machine opened in 2009, and it is currently the tallest and fastest roller coaster at Kings Island at 230 feet tall with a top speed of 80 mph. The location of this coaster is Cedar Point in Sandusky, Ohio. No air resistance; we will not take the effects of air resistance into account, either. And that does not include Michele Bachmann, who enjoyed her own rise and fall before Rick Perry's parabolic flight. Part B: The second part of the new coaster is a parabola.
This article is from the Roller Coaster FAQ, by Geoff Allen [email protected] The lift hill is 175′ and the highest peak stands at 190′ tall. The roller coaster is an amusement ride developed for amusement parks and modern theme parks. Similar to the image below: this specific roller coaster is The Behemoth. They all portray the figure of a parabola. This WebQuest is designed for 10th-12th grade high school mathematics students who are learning how to create quadratic equations by viewing parabolic graphs. You will end up with a 3-dimensional representation of your roller coaster. There was a parabola involved in the path of the flight of one of Homer's rockets. A parabola is an open plane curve formed by the intersection of a cone with a plane parallel to its side. I need some reasons why a roller coaster is a parabola besides: shape, ascent and descent. Find out how structural engineers use parabolas to design roller coasters.
# How to show that for any $r>0$, $\lim_{n\rightarrow\infty}\int_{\mathbb{R}\backslash[-r,r]}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx=0$

I'm trying to show that the sequence $\sqrt{\frac{n}{\pi}}e^{-nx^{2}}$ converges to the Dirac delta. To prove it, I need to show that for any $r>0$, $$\lim_{n\rightarrow\infty}\int_{\mathbb{R}\backslash[-r,r]}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx=0$$ So, I have $$\lim_{n\rightarrow\infty}\int_{\mathbb{R}\backslash[-r,r]}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx=\lim_{n\rightarrow\infty}\left(\int_{\mathbb{R}}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx-\int_{[-r,r]}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx\right)\\=\lim_{n\rightarrow\infty}\left(1-\int_{-r}^{r}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx\right)\\=\lim_{n\rightarrow\infty}\left(1-\int_{-r\sqrt{n}}^{r\sqrt{n}}\frac{1}{\sqrt{\pi}}e^{-t^{2}}\,dt\right)$$ Intuitively, it is zero, but how can I show it rigorously? • So: first, the Dirac delta is a distribution, not a function. Secondly: your last limit clearly converges to $1$, as one can easily apply the inequality $|\int_a^b f\ dx|\leq (b-a)\sup_{x\in(a,b)}|f(x)|$, and see that the latter converges to $0$, as the supremum of $f$ is bounded, and the interval length goes to $0$. Are you sure you need to prove what you claim? – b00n heT Jun 6 '16 at 6:43 • @b00nheT In this case, $(b-a)\sup _{x\in(a,b)} |f(x)|=2r\sqrt n\, |f(0)|$, and it seems not to converge as $n\rightarrow \infty$, does it? – Darae-Uri Jun 6 '16 at 6:49 • You are most definitely right... I thought the boundary terms read $r/\sqrt{n}$, but I was clearly mistaken. So yes, you are correct, and to prove that just use the fact that $e^{-t^2}$ is an integrable function, so that the limit for $n\rightarrow \infty$ converges to the well defined integral $\int_\mathbb{R}\frac{1}{\sqrt{\pi}}e^{-t^2}dt$, as $r\sqrt{n}\rightarrow \infty$ for any $r>0$.
– b00n heT Jun 6 '16 at 7:00

In most cases, you can explicitly bound the integrand by integrable functions so that you can integrate and then take the limit, especially if you want to prove the limit is zero. $\def\nn{\mathbb{N}}$ $\def\rr{\mathbb{R}}$ $\def\less{\smallsetminus}$ Given $r > 0$, note that $\frac{x}{r} \ge 1$ for every $x \ge r$. Thus, as $n \in \nn \to \infty$, $\int_{\rr\less[-r,r]} \sqrt{\frac{n}{π}} e^{-nx^2}\ dx \le 2 \int_r^\infty \sqrt{\frac{n}{π}} \frac{x}{r} e^{-nx^{2}}\ dx = \left[ - \sqrt{\frac{n}{π}} \frac{1}{rn} e^{-nx^2} \right]_r^\infty \to 0$. I'll leave the last limit for you to check, but it's kind of obvious. Note that it is crucial that $r > 0$. • Look! No hard-to-evaluate integrals! – user21820 Jun 6 '16 at 7:24 Hint. Since $$\int_{-\infty}^{\infty}\frac{1}{\sqrt{\pi}}e^{-t^{2}}dt=1$$ then we have $$\lim_{n\rightarrow\infty}\left(1-\int_{-r}^{r}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx\right)=\lim_{n\rightarrow\infty}\left(1-\int_{-r\sqrt{n}}^{r\sqrt{n}}\frac{1}{\sqrt{\pi}}e^{-t^{2}}dt\right)=0.$$ • The hint you gave me is already in my question. See the 6th line of the question. – Darae-Uri Jun 6 '16 at 7:07 • @Darae-Uri Please, unless you would like to prove that, as $z \to \infty$, we have something like $\text{erf}(z)=1-\frac{1}{\sqrt{\pi }}e^{-z^2} \left(\frac{1}{ z}-\frac{1}{2 z^3}+O\left(\frac{1}{z^4}\right)\right)$, the hint I gave is OK. – Olivier Oloa Jun 6 '16 at 7:24 A probabilistic approach. $$f_n(x) = \sqrt{\frac{n}{\pi}} e^{-nx^2}$$ is the density function of a random variable $X_n$ that is normally distributed, with mean zero and variance $\sigma_n^2 = \frac{1}{2n}$. By Chebyshev's inequality, $$\int_{\mathbb{R}\setminus[-r,r]}f_n(x)\,dx = \mathbb{P}[|X_n|>r]=\mathbb{P}[|X_n|>r\sqrt{2n}\,\sigma_{n}]\leq \frac{1}{2nr^2}$$ and the RHS tends to zero as $n\to +\infty$ for any $r\in\mathbb{R}^+$.
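As a numerical sanity check of the limit: the substitution $t = x\sqrt{n}$ used in the question gives $\int_{\mathbb{R}\setminus[-r,r]}\sqrt{\frac{n}{\pi}}e^{-nx^{2}}\,dx = \operatorname{erfc}(r\sqrt{n})$, which the standard library can evaluate directly. A sketch, not part of any answer above:

```python
import math

def tail(n, r):
    # integral of sqrt(n/pi) * exp(-n x^2) over |x| > r:
    # with t = x*sqrt(n) it equals (2/sqrt(pi)) * int_{r sqrt(n)}^inf e^{-t^2} dt,
    # which is exactly erfc(r * sqrt(n))
    return math.erfc(r * math.sqrt(n))

for n in (1, 10, 100, 1000):
    print(n, tail(n, 0.5))   # shrinks rapidly toward 0
```

The printed values collapse super-exponentially, matching the bound $\frac{1}{2nr^2}$ from the Chebyshev answer only loosely (the true decay is much faster).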
# zbMATH — the first resource for mathematics The holographic Weyl anomaly. (English) Zbl 0958.81083 Summary: We calculate the Weyl anomaly for conformal field theories that can be described via the AdS/CFT correspondence. This entails regularizing the gravitational part of the corresponding supergravity action in a manner consistent with general covariance. Up to a constant, the anomaly only depends on the dimension d of the manifold on which the conformal field theory is defined. We present concrete expressions for the anomaly in the physically relevant cases $d = 2, 4$ and $6$. In $d = 2$ we find for the central charge $c = 3l/2G_N$, in agreement with considerations based on the asymptotic symmetry algebra of $\text{AdS}_3$. In $d = 4$ the anomaly agrees precisely with that of the corresponding $N = 4$ superconformal $\text{SU}(N)$ gauge theory. The result in $d = 6$ provides new information for the $(0, 2)$ theory, since its Weyl anomaly has not been computed previously. The anomaly in this case grows as $N^3$, where $N$ is the number of coincident M5 branes, and it vanishes for a Ricci-flat background. ##### MSC: 81T30 String and superstring theories; other extended objects (e.g., branes) in quantum field theory 81T40 Two-dimensional field theories, conformal field theories, etc. in quantum mechanics 83E30 String and superstring theories in gravitational theory 81T50 Anomalies in quantum field theory
# Can you help with the Laplace transform of the derivative of sin(at)

lizzie

Can you help with the Laplace transform of the derivative of $\sin(at)$?

fitzmerl duron

I think of two versions of that problem. 1.) I have to get the derivative of $\sin(at)$ first before getting the Laplace transform of the derivative of $\sin(at).$ 2.) This form: $\mathcal L\{ \frac{d}{dt}(\sin (at)) \}$ I think I will do #1, because there is another operation that deals with Laplace transforms of derivatives.

fitzmerl duron

To get the Laplace transform of the derivative of $\sin (at)$, first get the derivative of $\sin (at)$. So we let $u = at$, with $\frac{du}{dt} = a.$ Then recall the chain rule $\frac{d}{dt}(\sin u) = \cos(u) \frac{du}{dt}$. So.... $$\frac{d}{dt}(\sin u ) = \cos (u) \frac{du}{dt}$$ $$\frac{d}{dt}(\sin (at) ) = \cos (at) (a)$$ $$\frac{d}{dt}(\sin (at) ) = a \cos (at)$$ We will get the Laplace transform of $a \cos (at)$. Recall that the Laplace transform of $\cos (\omega_o t)$ is $\frac{s}{s^2 + \omega_o ^2}$. Then the Laplace transform of $a \cos (at)$ would be: $$\mathcal L \{ \cos (\omega_o t) \} = \frac{s}{s^2 + \omega_o ^2}$$ $$\mathcal L \{ a \cos (at) \} = a \left( \frac{s}{s^2 + (a)^2} \right)$$ $$\mathcal L \{ a \cos (at) \} = a \left( \frac{s}{s^2 + a^2} \right)$$ We conclude that the Laplace transform of $a \cos (at)$ is $\frac{a s}{s^2 + a^2}$. Hope it helps :-)

fitzmerl duron

hi Lee :-)
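The closed form $\frac{as}{s^2+a^2}$ can be sanity-checked by brute-force numerical evaluation of the defining integral $\int_0^\infty f(t)e^{-st}\,dt$. This is a rough sketch, not part of the original thread; the values of a, s, the cutoff T, and the grid size are arbitrary choices:

```python
import math

def laplace_num(f, s, T=60.0, n=120000):
    # trapezoidal approximation of int_0^T f(t) e^{-s t} dt;
    # for s > 0 the tail beyond T is negligible once e^{-s T} ~ 0
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

a, s = 2.0, 3.0
numeric = laplace_num(lambda t: a * math.cos(a * t), s)
closed = a * s / (s ** 2 + a ** 2)   # the a*s/(s^2 + a^2) from the post
print(numeric, closed)
```

The two numbers agree to several decimal places, which is as much as a crude trapezoidal rule can promise.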
• Transition metal-modified polyoxometalates supported on carbon as catalyst in 2-(methylthio)-benzothiazole sulfoxidation • # Fulltext https://www.ias.ac.in/article/fulltext/jcsc/127/01/0123-0132 • # Keywords Supported metal-substituted polyoxometalates; carbon; sulfoxidation; 2-(methylthio)-benzothiazole oxidation; hydrogen peroxide. • # Abstract Polyoxometalates with lacunary Keggin structure modified with transition metal ions [PW11O39M(H2O)]5−, where M = Ni2+, Co2+, Cu2+ or Zn2+, were synthesized and supported on activated carbon to obtain the PW11MC catalysts. Using FT-IR and DTA-TGA it was concluded that the [PW11O39M(H2O)]5− species are interacting with the functional groups of the support, and that thermal treatment leads to the loss of the coordinatively bonded water molecules without any noticeable anion degradation. The activity and selectivity of the catalysts in the sulfoxidation reaction of 2-(methylthio)-benzothiazole, an emerging environmental pollutant, were evaluated. The reaction was carried out in acetonitrile as solvent using H2O2 35% p/v as a clean oxidant. The conversion values decreased in the following order: PW11NiC > PW11CuC > PW11CoC > PW11ZnC, with selectivity to sulfoxide higher than 69%. The catalyst could be reused without appreciable loss of the catalytic activity at least three times. The materials were found to be efficient and recyclable catalysts for 2-(methylthio)-benzothiazole sulfoxidation in order to obtain a more biodegradable product than the corresponding substrate. • # Author Affiliations 1. Centro de Investigación y Desarrollo en Ciencias Aplicadas “Dr. Jorge J. Ronco” (CINDECA), CCT-LaPlata-CONICET, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 47 N° 257 (B1900AJK) La Plata, Argentina
• anonymous

Five times a number is the same as 30 more than 8 times the number. If n is "the number," which equation could be used to solve for the number?

• 3n = 30
• 5n + 8n = 30
• 5n = 8n + 30
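Translating the sentence literally gives the third option, 5n = 8n + 30, which solves to n = −10. A one-line check (a sketch, not part of the original thread):

```python
# 5n = 8n + 30  ->  5n - 8n = 30  ->  -3n = 30  ->  n = -10
n = 30 / (5 - 8)
print(n)                     # -10.0
print(5 * n == 8 * n + 30)   # True: both sides equal -50
```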
### TCAP Success Grade 3 MATH Chapter 5 Sample

1 pt 1. A piece of paper is divided into 6 equal pieces. Mark receives 3 of the pieces. What fraction of the whole piece of paper does Mark have?
1 pt 2. Which whole number is equal to $\frac{4}{1}$?
1 pt 6. Find the missing fractions on the number line.
1 pt 10. Lila cut a piece of plaid cloth. The length of the piece is shown in the picture below. Which fractions are equivalent to the length of Lila's piece of fabric?
1 pt 11. What fraction of this fraction model is shaded?
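Questions 1 and 2 can be checked with the standard-library fractions module; a sketch, not part of the test itself:

```python
from fractions import Fraction

mark = Fraction(3, 6)          # question 1: 3 of 6 equal pieces
print(mark)                    # Fraction reduces automatically to 1/2

print(Fraction(4, 1) == 4)     # question 2: 4/1 equals the whole number 4
```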
mersenneforum.org > Data Thinking out loud about getting under 20M unfactored exponents

2021-10-21, 08:17 #815 LaurV Romulan Interpreter "name field" Jun 2011 Thailand 2×3×5×7×47 Posts

Yay! Found a gem!
Code: processing: P-1 factor 67384156745900300766317103355688421517561 for M11321579 (B1=6,189,107, B2=185,673,210) (135.630 bits) CPU credit is 8.9141 GHz-days.

2021-10-21, 16:56 #816 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 4997₁₀ Posts

Quote: Originally Posted by LaurV Yay! Found a gem! Code: processing: P-1 factor 67384156745900300766317103355688421517561 for M11321579 (B1=6,189,107, B2=185,673,210) (135.630 bits) CPU credit is 8.9141 GHz-days.

2021-10-22, 01:49 #817 LaurV Romulan Interpreter "name field" Jun 2011 Thailand 23216₈ Posts

Quote: Originally Posted by petrw1 When I started this project just over 4 years ago there were 498 ranges to go. Now with 166 ranges to go we are officially 2/3 complete.

We just passed another milestone: Less than 333333 candidates left to test (that is, the total number of exponents in the remaining ranges, yesterday evening Thai time, was exactly 333000). Something to do with being half evil... Last fiddled with by LaurV on 2021-10-22 at 02:12

2021-10-22, 02:26 #818 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 11605₈ Posts

Quote: Originally Posted by LaurV We just passed another milestone: Less than 333333 candidates left to test (that is, the total number of exponents in the remaining ranges, yesterday evening Thai time, was exactly 333000). Something to do with being half evil...

When I started (July 2017) there were just under 1,050,000 to test. This makes us over 67% complete by this measure. Also when I started I needed 55,228 factors. As of now there are 9,471 remaining. 82.5% complete by this measure. There were 497 incomplete ranges. Now there are 158. 69.2% by this measure.
However, all things considered the number 333,333 is the most interesting.

2021-10-23, 00:32 #819 LaurV Romulan Interpreter "name field" Jun 2011 Thailand 10011010001110₂ Posts

And one more... How does 179 bits sound? Unfortunately, not prime...
Code: processing: P-1 no-factor for M11318603 (B1=6,189,107, B2=185,673,210) CPU credit is 8.9141 GHz-days. Splitting composite factor 735438206294405813130223741791584805574508614902245969 into: * 179701183696182915867337 * 4092561836085598728177371678537 processing: P-1 factor 179701183696182915867337 for M11318563 (B1=6,189,107, B2=185,673,210) (77.250 bits) CPU credit is 8.9141 GHz-days. processing: P-1 factor 4092561836085598728177371678537 for M11318563 (B1=6,189,107, B2=185,673,210) (101.691 bits) CPU credit is 8.9141 GHz-days.

2021-10-23, 05:42 #820 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 19·263 Posts

Quote: Originally Posted by LaurV M113.... It's your compute but with the 1523 TF held by GPU72 in this range; expected to be completed next Friday; you do have the option to see if that completes M11.3 and move to another range in the mean time....11.2M ???

2021-10-26, 05:37 #821 LaurV Romulan Interpreter "name field" Jun 2011 Thailand 23216₈ Posts

That is actually not a bad idea. The order I went for, in 11.3xM, was x=6,5,4,3,2,1,8,9,7,0, in order, i.e. I started with those sub-sub-ranges that had more-than-200 candidates and ended with those having fewer. The idea was that if I find enough factors in the first sub-sub-ranges, I won't need to bother with the last. And do a little step towards a possible utopian futuristic goal of having "less than 200 unfactored candidates in each sub-sub-range". Now, I already finished 6,5,4,3,2, and I am currently working on 1; all of these had more than 200 (some still have) candidates. Now, 8 still has 201 candidates, and the rest of the sub-sub-ranges have less than 200 candidates.
There are only a few candidates left in 1, but the progress is very slow, nothing like I was doing a month ago or so. I was thinking to take a break anyhow after finishing 1, or at the latest after I find one factor in 8, then wait and see what gpu72 TF will provide. For how much P-1 was done here, I don't believe that the TF work will bring more than 10 factors or so, per bitlevel, and the next bitlevel is A LOT LESS efficient than P-1, at this size of the exponents (per wall clock, regardless of your GPU). So, I still think some P-1 will need to be done at the end to get the last 1-2-3-few factors.

2021-10-26, 05:58 #822 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 1001110000101₂ Posts

Quote: Originally Posted by LaurV ... but the progress is very slow...

I seem to recall you have a few tit-ans that would love to chew through P1 work

2021-10-26, 06:55 #823 LaurV Romulan Interpreter "name field" Jun 2011 Thailand 2×3×5×7×47 Posts

Quote: Originally Posted by petrw1 I seem to recall you have a few tit-ans that would love to chew through P1 work

Haha Still have 6 working Titans (5 classic and 1 black), plus 4x 580s (these are the old NV Fermi, not the much newer AMD 580s), plus 2x 1080Ti, plus 1x R6990 (or 7990, not sure), which don't do anything currently, they are boxed and shelved, because I have no rig to put them in, and running them doesn't really justify the electricity costs. Plus a lot of rubbish, about 20 other cards, including 7 more Titans from which I took components to repair the 6 working cards (totally 13 Titans, including working and rubbish). Some day... :hope: Last fiddled with by LaurV on 2021-10-26 at 06:57 Reason: s/to rig/no rig/

2021-10-26, 08:30 #824 Luminescence Oct 2021 Germany 101011₂ Posts

Quote: Originally Posted by LaurV Haha Still have 6 working Titans (5 classic and 1 black)…

Woah, 6 Titans. How is the throughput with those six?
I have two RX 6900 XTs (only one hooked up) and a run with 1M/30M in the 2xM range takes 10 minutes. I am still trying to fit my second one in the case… they are huge cards

2021-10-26, 15:38 #825 petrw1 1976 Toyota Corona years forever! "Wayne" Nov 2006 19×263 Posts

Quote: Originally Posted by Luminescence I have two RX 6900 XTs (only one hooked up) and a run with 1M/30M in the 2xM range takes 10 minutes.

That explains how quickly you are clearing the ranges....I need one of those....or two
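As an aside for readers: the factor reports above are easy to sanity-check. A factor q of the Mersenne number M_p = 2^p − 1 must satisfy 2^p ≡ 1 (mod q), and the "(135.630 bits)" figure is just log2 of the factor. A sketch using the classic small example M11 = 2047 = 23 × 89 plus the 41-digit factor from post #815:

```python
import math

def divides_mersenne(p, q):
    # q divides 2^p - 1  iff  2^p ≡ 1 (mod q); three-argument pow()
    # keeps this fast even for the 11M-range exponents in this thread
    return pow(2, p, q) == 1

def bits(q):
    # the "(135.630 bits)" size figure reported with each factor
    return math.log2(q)

# classic check: M11 = 2047 = 23 * 89
print(divides_mersenne(11, 23), divides_mersenne(11, 89))

# the P-1 factor of M11321579 from post #815
f = 67384156745900300766317103355688421517561
print(round(bits(f), 3))            # reported as 135.630 in the post
print(divides_mersenne(11321579, f))
```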
# Are question and answer scores correlated with post length? Recently David Z said in chat that "It's good to keep in mind that people are less likely to read a longer answer." So I was wondering: How true is it that length can work against you in an answer? While we can't directly measure the number of users who do read individual answers, surely there is some data we can dig up. I'll post my own findings. Everyone else should feel free to contradict me with better statistics (not a hard thing to do). • Of course, one can't necessarily assume that likelihood of being read correlates with vote count. ;-) – David Z Jul 24 '14 at 9:46 • It would also be interesting to replace in the investigations below the length of the post by some measure for the "amount of LaTeX"... – Dilaton Jul 24 '14 at 10:12 • @Dilaton count '$' characters perhaps? (and also '[', '('... a bit tricky to do). – Kyle Oman Jul 24 '14 at 17:04 • @Kyle One would use regular expressions in the statement to count the number of characters in sections extracted between $ markers and $$ markers. I don't know enough SQL or regex to do it myself though. – tpg2114 Jul 24 '14 at 17:11 • Err, I tried to say '\[' rather than '[' and '\(' rather than '('. Apparently I need to escape my backslashes. – Kyle Oman Jul 24 '14 at 17:14 • @tpg2114 and then there are those like me who regularly use the little-known fact that \begin{align}...\end{align} works in MathJax without $ signs. – user10851 Jul 24 '14 at 17:15 • Regex involving backslashes, shiver. – Kyle Oman Jul 24 '14 at 17:18
# Normalized Scores Perhaps a better measure is (score) / (page views). At least this way we are closer to answering the question "of those who looked at this question/answer, what fraction thought it was good?" Voters don't seem to care much about a post's length. # Accepted vs. Not Accepted Answers Could there be a difference in voting patterns as a function of post length between these two populations of answers? It's a busy plot, but I don't see a trend in either population. # Conclusion So, at least in this rough analysis, it seems post length really doesn't matter when it comes to votes. Of course, some of my answers I'm most proud of are rather long expositions on rich topics, so I'm not saying we should all write one-sentence answers to maximize votes per unit effort. Finally, I leave this last graph that has nothing to do with post length. It's not really surprising; it shows that answer score is correlated with (the absolute value of) question score. • If anyone spots any egregious error in my SQL that invalidates these queries, please point out my shortcomings. – user10851 Jul 24 '14 at 6:58 • I'm curious about your Answer score as a function of length plot, which could be interpreted as saying that there's some 'critical spot' at around 10k chars where answer ratings can be much better or much worse than average. What's the shape of the distribution around that point? Does it "grow a bump upwards"? Or is it simply broader? (It's unlikely, for instance, for answers with score <-10 to last long without being automatically deleted.) – Emilio Pisanty Jul 24 '14 at 10:54 • Chris, this SEDE query is an attempt at doing the quartiles. What are your thoughts? – Emilio Pisanty Jul 24 '14 at 12:33 • How does your answer score as a function of length chart incorporate number of answers of that length? 
If it simply averages over the number of answers in that bin, that might explain why there's a dramatic increase after 18000; very few answers of that length so the average can easily be offset by one particularly high-voted answer – Jim Jul 24 '14 at 14:04 • @EmilioPisanty I like the quartiles a lot better than standard deviations for such non-Gaussian data. I also would never have been able to figure out how to write that query. – user10851 Jul 24 '14 at 16:20 • @ChrisWhite Yeah, it was a lot harder than I thought it was going to be. (Usually is, too.) And yeah, the data is not gaussian at all. – Emilio Pisanty Jul 24 '14 at 16:26 • It's also notable that the 'big-hit' answers (i.e. the top of the fourth quartile) is very high for shorter answers. – Emilio Pisanty Jul 24 '14 at 16:30 • +1. Using the standard equation "1 picture = 1000 words", this is a 6343 word long answer, and it's a pretty good answer. – David Hammen Aug 4 '14 at 20:28
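For anyone who would rather replay the quartile binning offline than in SEDE, here is a minimal Python sketch. The input format, bin width, and use of `statistics.quantiles` are my own choices, not the actual query:

```python
import statistics
from collections import defaultdict

def binned_quartiles(posts, bin_width=2000):
    # posts: iterable of (body_length, score) pairs,
    # e.g. parsed from a SEDE CSV export
    bins = defaultdict(list)
    for length, score in posts:
        bins[length // bin_width].append(score)
    return {
        b * bin_width: tuple(statistics.quantiles(scores, n=4))
        for b, scores in sorted(bins.items())
        if len(scores) >= 2        # quantiles() needs at least two points
    }

demo = [(100, 1), (300, 2), (700, 3), (1200, 4), (1800, 5)]
print(binned_quartiles(demo))      # one bin, quartiles of [1..5]
```

As with the SEDE version, quartiles are a better summary than mean ± sigma here, since the score distribution is nowhere near Gaussian.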
# 2012-08-03 Free French OCR for Mac Once again I've decided I needed to work on the memoirs of my grandfather Roland Li-Marchetti. I had digitized 16 pages many years ago but his memoirs contain a total of 45 pages of typed text. The last time I worked on this, I was using the OCR software that came with my scanner (a cheap Canon LiDE 25) – but today the scanner was no longer recognized by the operating system. I faintly remember having experienced this before when I upgraded my system. Bit rot! Anyway, I was in the mood to try something new. Free Software? 1. Tesseract 2. requires Leptonica 3. and I needed to install GNU Libtool because I was getting an error: "Libtool library used but `LIBTOOL' is undefined. The usual way to define `LIBTOOL' is to add `AC_PROG_LIBTOOL' to `configure.ac' and run `aclocal' and `autoconf' again. If `AC_PROG_LIBTOOL' is in `configure.ac', make sure its definition is in aclocal's search path." (While the stuff is compiling, I am in fact using a free online OCR service.) Here is the original, taken with my Pentax K100D, loaded into Gimp, rotated, cropped, and auto-adjusted levels. The tesseract output is pretty cool: que mes vingt prochaines années soient aussi riches d'aventures et de bonheur auprès des miens, main dans la main avec Agnès mon inséparable complice qui a beaucoup sacrifié et que j'espère pouvoir encore rendre heureuse. (Roughly: may my next twenty years be as rich in adventure and happiness among my loved ones, hand in hand with Agnès, my inseparable companion, who has sacrificed much and whom I hope I can still make happy.) (When I tried it on a direct photo of the page the result was far less pleasing.) Yay! for ((i=20; i<=46; i++)); do tesseract IMGP$((5210+$i)).JPG "page-$i" -l fra done Nice, there are multiple ring binders of my grandfather's memoirs as well. One day I should digitize them too. BTW.
you might want to have a look for other OCR solutions (I guess most of what I've written there would apply to Mac as well). Andreas Gohr 2012-08-03 13:09 UTC Excellent! Thank you very much. I feel relieved that I seem to have picked the best free option. AlexSchroeder 2012-08-03 14:09 UTC
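The bash loop in the post can also be scripted from Python, which makes it easier to skip already-processed pages or log failures. A sketch mirroring the same IMGP numbering and the same `tesseract input output -l fra` invocation; I have not run this exact script:

```python
import shutil
import subprocess

def tesseract_args(first=20, last=46, offset=5210, lang="fra"):
    # mirror the bash loop: IMGP5230.JPG .. IMGP5256.JPG -> page-20 .. page-46
    return [
        ["tesseract", f"IMGP{offset + i}.JPG", f"page-{i}", "-l", lang]
        for i in range(first, last + 1)
    ]

if shutil.which("tesseract"):      # only run if the binary is installed
    for args in tesseract_args():
        subprocess.run(args, check=True)
```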
[texhax] Can I invoke some tex command at commank like this? E. Krishnan ekmath at md5.vsnl.net.in Fri Sep 26 21:49:17 CEST 2003 On Fri, 26 Sep 2003, Chakrit Nimmanant wrote: > As you can see, TeX is very useful to determine > the dimension of my "Evam". So, I want to invoke some > TeX commands on standard input which can receive > "Evam" as a parameter and returns a value of \wd0 to > my standard output. This is how I'd go about it in LaTeX: Make a file named boxdim.tex containing the lines \documentclass{article} \begin{document} \newlength{\boxwd} \newlength{\boxht} \newlength{\boxdp} \typeout{} \typein[\boxname]{Please type the box content} \typeout{} \settowidth{\boxwd}{\boxname} \settoheight{\boxht}{\boxname} \settodepth{\boxdp}{\boxname} \typeout{} \typeout{These are the dimensions of the box} \typeout{} \typeout{width = \the\boxwd} \typeout{} \typeout{height = \the\boxht} \typeout{} \typeout{depth = \the\boxdp} \typeout{} \end{document} and compile the file with LaTeX. (The empty "typeouts" are just cosmetic.) -- Krishnan
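Since the original question asked for \wd0 directly, a plain-TeX variant along the same lines (a sketch, untested here) can skip the LaTeX length registers entirely:

```latex
% Plain TeX: typeset "Evam" into box register 0, then read its
% dimensions straight from \wd0, \ht0 and \dp0, as in the question.
\setbox0=\hbox{Evam}
\message{width = \the\wd0, height = \the\ht0, depth = \the\dp0}
\bye
```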
# Evaluating the Stolt stretch parameter Sergey Fomel sergey@sep.stanford.edu # ABSTRACT The Stolt migration extension to a varying velocity case (Stolt stretch) implies describing a vertical heterogeneity by a constant parameter (W). This paper exploits the connection between modified dispersion relations and traveltime approximations to derive an explicit expression for W. The expression provides theoretically the highest possible accuracy within the Stolt stretch framework. Applications considered include optimal partitioning of the velocity distribution for the cascaded migrations and extension of the Stolt stretch method to transversally isotropic models. Stolt migration is regarded as the fastest post-stack migration method of all the known algorithms. A known price for that speed is the constant velocity assumption. The time-stretching trick proposed in Stolt's classic paper provides an approximate extension of the method to a variable velocity case. Stolt stretch implicitly transforms reflection traveltime curves to fit an approximate constant velocity pattern. In other words, the wave equation with variable velocity is transformed by a particular stretch of the time axis to an approximate differential equation with constant coefficients. The two constant coefficients are an arbitrarily chosen frame velocity and a specific nondimensional parameter (W in Stolt's original notation). In the constant velocity case W is equal to 1, and the transformed equation coincides with the exact constant velocity wave equation. In variable velocity media, W is generally assumed to lie between 1/2 and 1. As shown by Beasley et al., the cascaded f-k migration approach can move the value of W for each migration in a cascade closer to 1, thus increasing the accuracy of the Stolt stretch approximation. The W factor was defined by Stolt as an approximate average of a complicated function.
Stolt's definition cannot be used directly for computation because it includes a combined dependence on both time and space coordinates. Therefore, in practice, the estimation of this factor is always replaced by a heuristic guess. That's why Levin called the W parameter ``infamous'' (joking, of course), and Beasley et al. called it ``esoteric.'' This paper develops a method to evaluate the Stolt stretch parameter explicitly. The main idea is to constrain the parameter by fitting the exact and approximated traveltime functions. In the case of isotropic interpretation, the W parameter is connected to the parameter of ``heterogeneity''. In the case of anisotropic (transversally isotropic) interpretation, it can be related to the parameter of ``anellipticity''. STOLT STRETCH THEORY In order to simplify the references, I will begin with the textbook definitions of the Stolt migration method. The reader familiar with Stolt stretch theory can skip this section and go on to a new piece of theory in the next one. Post-stack seismic migration is theoretically a two-stage process consisting of wavefield downward continuation in depth z based on the wave equation (1) and the imaging condition t=0 (here the velocity v is half the actual wave velocity). Stolt time migration performs both stages in one step, applying the frequency-domain operator (2) where stands for the initial zero-offset (stacked) seismic section defined on the surface z=0, is the time-migrated section, and tv is the vertical traveltime (3) The function in (2) corresponds to the dispersion relation of the wave equation (1) and in the constant velocity case has the explicit expression (4) The choice of the sign in (4) is essential to distinguish between upgoing and downgoing waves. It is the upgoing part of the wave field that is used in migration.
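The explicit expression (4) itself was lost in the text extraction, but the standard constant-velocity dispersion relation behind Stolt migration is well known: from the wave equation, omega² = v²(kx² + kz²), so each (omega, kx) pair maps to a vertical wavenumber kz, with the sign chosen to select the upgoing branch. A sketch of that mapping (my own code, not Fomel's):

```python
import math

def stolt_kz(omega, kx, v):
    # constant-velocity dispersion: omega^2 = v^2 (kx^2 + kz^2), hence
    #   kz = -sqrt(omega^2 / v^2 - kx^2)
    # (negative sign picks the upgoing branch used in migration)
    radicand = (omega / v) ** 2 - kx ** 2
    if radicand < 0:
        return None        # evanescent region: |omega| < v*|kx|
    return -math.sqrt(radicand)

print(stolt_kz(10.0, 3.0, 2.0))   # omega/v = 5, kx = 3 -> kz = -4
```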
For the case of a varying velocity, Stolt suggested the following change of the time variable (referred to in the literature as Stolt stretch): (5) where v0 is an arbitrarily chosen constant velocity, and is a function defined by the parametric expressions (6) With the stretch (5), seismic time migration can be related to the transformed wave equation (7) Here and are the transformed depth and time coordinates that possess the following property: if , , and if , . W is a varying coefficient defined as (8) where Stolt's idea was to replace the slowly varying parameter W with its average value. Thus equation (7) is approximated by an equation with constant coefficients, which has the dispersion relation (9) Stolt's approximate method for migration in heterogeneous media consists of the following steps:

1. stretching the time variable according to equation (5),
2. interpolating the stretched time to a regular grid,
3. double Fourier transform,
4. f-k time migration by operator (2) with the dispersion relation (9),
5. inverse Fourier transform,
6. inverse stretching (shrinking) of the vertical time variable on the migrated section.

The value of W must be chosen prior to migration. According to Stolt's original definition (8), the depth variable z gradually changes in the migration process from zero to , causing the coefficient b in (8) to change monotonically from 0 to 1. If the velocity v monotonically increases with depth, then , and the average value of b is (10) As follows from equations (8) and (10), in the case of monotonically increasing velocity, the average value of W has to be less than 1 (W equals 1 in a constant velocity case). Analogously, in the case of a monotonically decreasing velocity, W is always greater than 1. In practice, W is included in migration routines as a user-defined parameter, and its value is usually chosen to be somewhere in the range of 1/2 to 1.
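As a concrete illustration, the stretch of the time variable in equation (5) can be evaluated numerically. Since the equation bodies are not reproduced in this text, the sketch below assumes the form of the stretch commonly quoted in the literature — a double time integral of the squared velocity, normalized by the frame velocity v0 — so treat it as an assumption, not the paper's exact formula. For constant velocity v = v0 the stretch reduces to the identity map, consistent with W = 1 in that case.

```python
import numpy as np

def stolt_stretch(t, v, v0):
    """Map time samples t to stretched time s(t).

    Assumed (commonly quoted) form of the Stolt stretch:
        s(t) = sqrt( (2 / v0**2) * int_0^t int_0^t' v(t'')**2 dt'' dt' )
    t  : 1-D increasing array of vertical times
    v  : velocity at each time sample
    v0 : arbitrarily chosen constant frame velocity
    """
    v2 = np.asarray(v, float) ** 2
    dt = np.diff(np.asarray(t, float))
    # inner integral of v^2 over time (trapezoid rule)
    inner = np.concatenate([[0.0], np.cumsum(0.5 * (v2[1:] + v2[:-1]) * dt)])
    # outer integral of the inner one
    outer = np.concatenate([[0.0], np.cumsum(0.5 * (inner[1:] + inner[:-1]) * dt)])
    return np.sqrt(2.0 * outer) / v0
```

A constant velocity equal to v0 leaves the time axis unchanged, while a velocity increasing above v0 stretches it forward.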
In this paper I describe a straightforward way to determine the most appropriate value of W for a given velocity distribution. A useful tool for that purpose is Stewart Levin's formula for the traveltime curve. Levin applied the stationary phase technique to the dispersion relation (9) to obtain an explicit formula for the summation curve of the integral migration operator analogous to Stolt stretch migration. The formula evaluates the summation path in the stretched coordinate system, as follows: (11) Here x0 is the midpoint location on a zero-offset seismic section, and x is the space coordinate on the migrated section. Formula (11) shows that, with the stretch of the time coordinate, the summation curve has the shape of a hyperbola with the apex at and the center (the intersection of the asymptotes) at . In the case of homogeneous media, W=1, , and (11) reduces to the well-known hyperbolic diffraction traveltime curve. It is interesting to note that inverting formula (11) for determines the impulse response of the migration operator, which can be interpreted as the wavefront from a point source in the domain of equation (7): (12) where , and Q=2-W. According to equation (12), wavefronts from a point source in the stretched coordinates for W<2 have an elliptic shape, with the center of the ellipse at and the semi-axes and . The ellipses stretch differently for W<1 and W>1 (Figure 1). In the upper part, which corresponds to the upgoing waves, they look nearly spherical, since the radius of the front curvature at the top apex equals the distance from the source.

Figure 1: Wavefronts from a point source in the stretched coordinate system. Left: velocity decreases with depth (W=1.5). Right: velocity increases with depth (W=0.5).

# EVALUATING THE W PARAMETER AND STOLT STRETCH ACCURACY

Formula (11) belongs to the three-parameter class of traveltime approximations.
The key result of this paper uses a remarkable formal similarity between (11) and Malovichko's approximation for the reflection traveltime curve in vertically inhomogeneous media, defined by (13) where vrms stands for the effective (root mean square) velocity along the vertical ray (14) and S is the parameter of heterogeneity: (15) In terms of the S parameter, the variance of the squared velocity distribution along the vertical ray is (16) As follows from equality (16), for any type of velocity distribution (S equals 1 in a constant velocity case). For most of the distributions occurring in practice, S ranges between 1 and 2. Malovichko's formula (13) is known as the most accurate three-parameter approximation of the NMO curve in vertically inhomogeneous media. Since reflection from a horizontal reflector in that class of media is kinematically equivalent to diffraction from a point, formula (13) can likewise be regarded as an approximation of the summation path of the post-stack Kirchhoff-type migration operator. In this case, it has the same meaning as formula (11). An important difference between the two formulae is that equation (13) is written in the initial coordinate system and includes coefficients varying with depth, while equation (11) employs the transformed coordinate system and constant coefficients. Using this fact, the rest of this section compares the accuracy of the approximations and relates Stolt's W factor to Malovichko's parameter of heterogeneity. Equations (11) and (13) both approximate the traveltime curve in the neighborhood of the vertical ray. Therefore, to compare their accuracy, it is appropriate to consider the series expansion of the diffraction traveltime in the vicinity of the vertical ray: (17) where l=x-x0. Expansion (17) contains only even powers of l because of the obvious symmetry of the traveltime as a function of l.
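Both quantities entering Malovichko's approximation (13) can be computed directly from an interval velocity profile. The sketch below assumes the standard definitions: vrms from equation (14) as the rms average of v along the vertical ray, and — hypothetically, since equation (15) is not reproduced here — S as the normalized fourth moment of the velocity, which equals 1 for constant velocity and can never fall below 1 by the Cauchy-Schwarz inequality, matching the behavior of S described in the text.

```python
import numpy as np

def _trap(y, x):
    """Trapezoid-rule integral of samples y over abscissas x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def vrms_and_S(t, v):
    """Effective (rms) velocity and heterogeneity parameter along the ray.

    Assumed definitions (my reading of equations (14)-(15)):
        vrms = sqrt(<v**2>),   S = <v**4> / <v**2>**2,
    where <.> is the time average over the vertical ray.  S == 1 for
    constant velocity; S >= 1 always (Cauchy-Schwarz).
    """
    t = np.asarray(t, float)
    v = np.asarray(v, float)
    span = t[-1] - t[0]
    m2 = _trap(v**2, t) / span  # <v^2>
    m4 = _trap(v**4, t) / span  # <v^4>
    return np.sqrt(m2), m4 / m2**2
```

For a constant velocity this returns S = 1; for a mildly increasing velocity S falls in the 1-to-2 range quoted in the text.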
The special choice of parameters tv, vrms, and S allows Malovichko's formula (13) to provide correct values for the first three terms of expansion (17): (18) (19) (20) Considering Levin's formula (11) as an implicit definition of the function , we can iteratively differentiate it following the rules of calculus: (21) (22) Substituting the definition of the Stolt stretch transform (5) into (21) produces an equality similar to (19), which means that approximation (11) is theoretically accurate in depth-varying velocity media up to the second term in (17). It is this remarkable property that proves the validity of the Stolt stretch method. Formula (11) will be accurate up to the third term if the value of the fourth-order traveltime derivative in (22) coincides with (20). Substituting equation (20) into (22) transforms the latter to the form (23) It is now easy to derive from equation (23) the desired explicit expression for the Stolt stretch parameter W, as follows: (24) Expression (24) is derived so as to provide the best possible value of W for a given depth (vertical time tv). To get a constant value for a range of depths, one should average the right-hand side of (24) over that range. The error associated with Stolt stretch can be approximately estimated from (17) as the difference between the fourth-order terms: (25) where is the right-hand side of (24), and W is the constant value chosen for Stolt migration. To estimate the best possible accuracy that the Stolt stretch method can achieve, we must take into account the sixth-order term in (17), related to the sixth-order derivative of the traveltime curve. For the true traveltime curve, the expression for the sixth-order derivative in the vicinity of the vertical ray is known from the literature to be (26) First, let us estimate the error of Malovichko's approximation (13).
Differentiating (13) six times and setting the offset l to zero yields (27) The estimated error is proportional to the difference between (27) and (26): (28) It is interesting to note that replacing the parameter of heterogeneity S by its definition (15) changes the expression in the round brackets to the following form: (29) According to the Schwarz inequality from calculus (also known as the Cauchy-Bunyakovski inequality), the value of expression (29) can never be less than zero; hence for any velocity distribution. This conclusion indicates that Malovichko's approximation tends to increase the traveltime at large offsets beyond its true value. Differentiating (22) twice and eliminating terms that vanish at l=0 produces (30) Evaluating the sixth-order traveltime derivative from (30) and subtracting (26), we get a somewhat lengthy but explicit expression for the error associated with the Stolt stretch approximation in the case of the best possible choice of W: (31)

# ISOTROPIC HETEROGENEITY VERSUS ANELLIPTIC ANISOTROPY

A controversial issue associated with the topic of this paper is whether the non-hyperbolicity of traveltime curves is caused mainly by heterogeneity or by anisotropy. To find a connection between the two different descriptions of media, we can consider an alternative three-parameter traveltime approximation (the anelliptic anisotropic moveout formula), proposed by Muir and Dellinger: (32) Here f is the parameter of anellipticity. Differentiating (32) four times, setting l=x-x0 to zero, and equating the result with (20) yields the following formal relationship between f and Malovichko's parameter of heterogeneity: (33) Equation (33) clearly demonstrates the uncertainty between the anisotropic and the heterogeneous isotropic interpretations. Both of them can explain the cause of the nonhyperbolicity of traveltime curves.
An important difference is that the parameter of heterogeneity is uniquely determined by the velocity distribution according to equation (15), while the f parameter is assumed to be an independent functional. The definition (15), applied in combination with (24), is suitable for calculating the Stolt stretch factor in an isotropic model for a given velocity function. If the correction parameter is measured experimentally by a non-hyperbolic velocity analysis, in the form of either equation (13) or equation (32), it accumulates both heterogeneous and anisotropic factors and can be used for an explicit determination of W in (24), independently of the preferred explanation. In the case of anisotropic moveout velocity analysis, we merely need to substitute the connection formula (33) into (24) to find W. An alternative approach to Stolt-type migration in transversally isotropic media was proposed recently by Ecker and Muir. However, Stolt stretch migration is superior to that method in its ability to cope with varying rms velocities.

# EXAMPLES

A simple analytic example of isotropic heterogeneity is the case of a constant velocity gradient. In this case the velocity distribution can be described by the linear function . The Stolt stretch transform is found from equation (5) as (34) Let be the logarithm of the velocity change v(z)/v(0). Then an explicit expression for the W factor follows from (24): (35) For (a small depth or a small velocity gradient), . For (a large positive change of velocity), W monotonically approaches zero. Formula (35) can be a useful rule of thumb for a rough estimate of W. A numerical example of the Stolt stretch parameter computation is illustrated in Figures 2 and 3. The left side of Figure 2 shows a smoothed interval velocity curve from the Gulf of Mexico. The corresponding optimal values of the W factor as a function of vertical time (in the isotropic model) are shown on the right.
Though the velocity function is smooth, substantial changes in W occur, making its mean value over the displayed times equal to 0.631. The theory of cascaded migrations proves that Stolt-type f-k migration for a nonuniform velocity can be performed as a cascaded process consisting of migrations with smaller velocities , such that . As shown by Larner and Beasley, it is important to partition the velocity so that for each particular tv all the velocities in the cascade, except perhaps the last one, are constant. The advantage of the cascaded f-k migration method is based on the fact that each small velocity vi describes a more homogeneous medium than the initial function. Therefore, the W factor for each migration in a cascade is closer to 1, and the Stolt stretch approximation is more accurate. This fact is illustrated in Figure 3, which shows an optimal partitioning of the velocity and the corresponding values of the W factor. In accordance with the empirical conclusions of Beasley et al., a cascade of only four migrations was sufficient to increase the value of W to more than 0.8. With a further increase in the number of cascaded migrations, the method becomes as accurate with respect to vertical velocity variations as phase-shift migration. Theoretically, this limit corresponds to the velocity continuation concept. Note that the theory of cascaded f-k migration is strictly valid for isotropic models. The anisotropic interpretation does not support it, since the intrinsic anisotropy factor is not supposed to change with the velocity partitioning.

Figure 2: Smoothed interval velocity distribution from the Gulf of Mexico (left) and the corresponding W factor as a function of vertical time (right). The mean value of W is 0.631.

Figure 3: Left: Optimal partitioning of the velocity function for the method of cascaded migrations. Right: corresponding mean values of W. Top: four cascaded migrations. Bottom: seven cascaded migrations.
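The partitioning requirement stated above — cascade velocities whose squares sum to the full squared velocity, each constant at a given vertical time except possibly the last — can be met by clipping the squared velocity into bands. The sketch below is my hypothetical illustration of such a Larner-Beasley style split, not the authors' exact algorithm.

```python
import numpy as np

def partition_velocity(v, n):
    """Split v(t) into n cascade velocities v_i(t) with sum(v_i**2) == v**2.

    Hypothetical scheme: choose n levels spanning the range of v**2 and
    clip each stage to its band.  The clipped bands telescope, so the
    squares sum exactly to v**2, and an early stage is constant wherever
    v**2 exceeds its upper level, as the cascaded f-k method requires.
    """
    v2 = np.asarray(v, float) ** 2
    levels = np.linspace(v2.min(), v2.max(), n + 1)
    levels[0] = 0.0  # the first stage starts from zero velocity
    stages = [np.clip(v2, levels[i], levels[i + 1]) - levels[i]
              for i in range(n)]
    return [np.sqrt(s) for s in stages]
```

For a monotonically increasing velocity, each stage becomes a small, mostly constant velocity, which is what drives the cascade's W values toward 1.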
The main result of this paper is the explicit analytic expression (24), which allows us to choose the most appropriate value of the Stolt stretch factor. Possible applications include the optimal design of interval velocity partitioning for the method of cascaded f-k migrations and the extension of the Stolt stretch method to a transversally isotropic model. Nowadays the topic of this paper seems to be out of fashion: when everyone is interested in prestack depth migration in the time-space domain, it is difficult to attract attention to post-stack time migration in the frequency domain. Nevertheless, I believe the art of approximation demonstrated by Robert Stolt in his famous paper to be a good example to follow in many different problems, and that belief was the main reason for this research.
# Cancelling magnetic fields# Two long wires, one of which has a semicircular bend of radius $$R$$, are positioned as shown in the figure. ## Part 1# If both wires carry a current $$I$$, how far apart must their parallel sections be so that the net magnetic field at P is zero? In your symbolic expression, use pi to represent π. ## Part 2# What is the direction of the current in the straight wire on the right-hand side?
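A worked sketch under an assumed geometry (the figure is not reproduced here): P is the center of the semicircular bend, the bent wire's straight sections run along the line through P and so contribute no field there, and the second wire's parallel section sits at distance d from P. The semicircle then contributes B = mu0·I/(4R) at P, the long straight wire mu0·I/(2·pi·d), and equating the two gives d = 2R/pi. For Part 2, the straight wire's field at P must oppose the semicircle's field; which way the current points depends on the figure's orientation.

```python
from math import pi

MU0 = 4e-7 * pi  # vacuum permeability, T*m/A

def field_semicircle(I, R):
    """Field magnitude at the center of a semicircular arc of radius R."""
    return MU0 * I / (4.0 * R)

def field_straight(I, d):
    """Field magnitude at distance d from a long straight wire."""
    return MU0 * I / (2.0 * pi * d)

def cancellation_distance(R):
    """Distance d at which the straight wire cancels the semicircle's
    field at P.  The current I drops out: d = 2*R/pi (assumed geometry)."""
    return 2.0 * R / pi
```

Note that d is independent of the current, since both field expressions are linear in I.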
# The Expectation in terms of probability

Let the independent random variables $X$ and $Y$ take values in the range $\{1,...,n\}$ • Calculate the Expectation of $X$ in terms of $P(X \geqslant k)$. Indeed, note that $$\{X=k\}=\{X\geq k\}\cup \{X<k \}$$ then $$P\{X=k\}=P\{X\geq k\}+P\{X<k \}$$ $$P\{X=k\}=P\{X\geq k\}+1-P\{X \geq k+1\}$$ $$E[X]=\sum_{k=1}^{n}kP\{X\geq k\}+\sum_{k=1}^{n}k-\sum_{k=1}^{n}kP\{X \geq k+1\}$$ • What you wrote doesn't make any sense at all. Suppose $n = 3$ and furthermore, suppose $\Pr[X = 1] = \Pr[X = 2] = \Pr[X = 3] = \frac{1}{3}$. Then $\Pr[X \ge 2] = \frac{2}{3}$ and $\Pr[X < 2] = \frac{1}{3}$, and $\Pr[X = 2] \ne \Pr[X \ge 2] + \Pr[X < 2]$. – heropup Jun 20 '14 at 6:15 \begin{align} p(1)+p(2)+p(3)+&... =P(X\ge 1)\\ p(2)+p(3)+&...=P(X\ge 2) \\ p(3)+&...=P(X\ge 3) \\ \end{align}\\ etc. Summing these $n$ equations column by column counts each $p(k)$ exactly $k$ times, so $$\sum_{k=1}^{n}P(X\ge k)=\sum_{k=1}^{n}k\,p(k)=E[X].$$
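The tail-sum identity $E[X]=\sum_{k=1}^{n}P(X\ge k)$ can be checked with exact rational arithmetic:

```python
from fractions import Fraction

def expectation_via_tails(pmf):
    """E[X] = sum_{k=1}^{n} P(X >= k) for X taking values in {1, ..., n}.

    pmf maps each value k to P(X = k); Fractions keep the check exact.
    """
    n = max(pmf)
    tail = lambda k: sum(p for value, p in pmf.items() if value >= k)
    return sum(tail(k) for k in range(1, n + 1))

# uniform distribution on {1, 2, 3}: both sides of the identity give 2
uniform = {k: Fraction(1, 3) for k in (1, 2, 3)}
```

Each $p(k)$ appears in exactly the tails $P(X\ge 1),\dots,P(X\ge k)$, i.e. $k$ times, which is why the sums agree.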
# CURRENT WORK

## Non-Universality in Computation: The Myth of the Universal Computer

Contributor: Selim Akl

For a long time people believed that the Sun orbits our Earth. We now know better. And so it is with universality in computation. Consider the following statement: ``It can also be shown that any computation that can be performed on a modern-day digital computer can be described by means of a Turing machine. Thus if one ever found a procedure that fitted the intuitive notions, but could not be described by means of a Turing machine, it would indeed be of an unusual nature since it could not possibly be programmed for any existing computer.'' (J.E. Hopcroft and J.D. Ullman, Formal Languages and their Relations to Automata, Addison-Wesley, Reading, Massachusetts, 1969, p. 80.) The first sentence in the above quote is clearly false, and well-known counterexamples abound in the literature. The second sentence, on the other hand, is only half true. Indeed, it is perfectly justified to say that a computation that cannot be performed on a Turing Machine must be of an unusual nature. However, this does not mean that such a computation cannot be programmed on any existing computer. As shown in what follows, the existence of such a computation would instead mean that the Turing Machine is simply not universal and that the Church-Turing Thesis (whose validity is assumed implicitly in the above quote) is in fact invalid. I have recently shown that the concept of a Universal Computer cannot be realized. Specifically, I exhibited instances of a computable function F that cannot be computed on any machine U that is capable of only a finite and fixed number of operations per step (or time unit). This remains true even if the machine U is endowed with an infinite memory and the ability to communicate with the outside world while it is attempting to compute F. It also holds if, in addition, U is given an indefinite amount of time to compute F.
The most general statement of my result is as follows: Non-universality in computation: Given n spatially and temporally connected physical variables, X1, X2, ..., Xn, where n is a positive integer, and a function F(X1, X2, ..., Xn) of these variables, no computer can evaluate F for any arbitrary n, unless it is capable of an infinite number of operations per time unit. Note that F is readily computable by a machine M capable of exactly n operations per time unit. However, this machine cannot compute F when the number of variables is n+1. While a second machine M' capable of n+1 operations per time unit can now compute the function F of n+1 variables, M' is in turn defeated by a function of n+2 variables. This continues forever. This point deserves emphasis. While the function F(X1, X2, ..., Xn+1) = F1(X1), F2(X2), ..., Fn+1(Xn+1) is easily computed by M', it cannot be computed by M. Even if given infinite amounts of time and space, machine M is incapable of simulating the actions of M'. Furthermore, machine M' is in turn thwarted by F(X1, X2, ..., Xn+2), a function computable by a third machine M''. This continues indefinitely. Therefore no computer is universal if it is capable of exactly T(i) operations during time unit i, where i is a positive integer, and T(i) is finite and fixed once and for all (for it will be faced with a computation requiring V(i) operations during time unit i, where V(i) > T(i) for all i). Examples of such a function F occur in: 1. Computations with time-varying variables: The variables, over which the function is to be computed, are themselves changing with time. 2. Computations with time-varying computational complexity: The computational complexity of the function to be computed is itself changing with time. 3. Computations with rank-varying computational complexity: Given several functions to be computed, and a schedule for computing them, the computational complexity of a function depends on its position in the schedule. 4.
Computations with interacting variables: The variables of the function to be computed are parameters of a physical system that interact unpredictably when the system is disturbed. 5. Computations with global mathematical constraints: The function to be computed is over a system whose variables must collectively obey a mathematical condition at all times. 6. Computations with uncertain time constraints: There is uncertainty with regard to the input (when and for how long are the input data available), the calculation (what to do and when to do it), and the output (the deadlines are undefined at the outset); furthermore, the function that resolves each of these uncertainties itself has demanding time requirements. For instance, suppose that the Xi are themselves functions that vary with time. It is therefore appropriate to write the n variables as X1(t), X2(t), ..., Xn(t), that is, as functions of the time variable t. Further assume that, while it is known that the Xi change with time, the actual functions that effect these changes are not known (for example, Xi may be a true random variable). The problem calls for computing Fi(Xi(t)), for i = 1, 2, ..., n, at time t = t0, where each Fi is a simple function of one variable that takes one time unit to compute. Specifically, let Fi(Xi(t)) simply represent the reading of Xi(t) from an external medium. The fact that Xi(t) changes with the passage of time means that, for k > 0, not only is each value Xi(t0 + k) different from Xi(t0), but also the latter cannot be obtained from the former. No computer capable of fewer than n read operations per time unit can solve this problem. We note in passing that the above example is deliberately simple in order to convey the idea. Reading a datum, that is, acquiring it from an external medium, is the most elementary form of information processing. Any computer must be able to perform such an operation.
This simplest of counterexamples suffices to establish non-universality in computation. Of course, if one wishes, the computation can be made more complex, at will. While our main conclusion remains unchanged, for some, a more complex argument may sound more convincing. Thus, for example, we may add arithmetic by requiring that Fi(Xi(t)) call for reading Xi(t) and incrementing it by one, for i = 1, 2, ..., n, at time t = t0. Reading Xi(t), incrementing it by one, and returning the result takes one time unit. In any case, a computer M capable of n operations per time unit (for example, one with n processors operating in parallel) can compute all the Fi(Xi(t)) at t = t0 successfully. A computer capable of fewer than n operations per time unit fails to compute all the Fi(Xi(t)) at t = t0. Indeed, suppose that a computer is capable of n-1 operations per time unit. It would compute n-1 of the Fi(Xi(t)) at t = t0 correctly. Without loss of generality, assume that it computes Fi(Xi(t)) at t = t0 for i = 1, 2, ..., n-1. Now one time unit would have passed, and when it attempts to compute Fn(Xn(t0)), it would be forced to incorrectly compute Fn(Xn(t0 + 1)). Is computer M universal? Certainly not. For when the number of variables is n+1, M fails to perform the above computation. As stated above, in the succession of machines M', M'', ..., each succeeds at one level only to be foiled at the next. The implication for the theory of computation is akin to that of Gödel's incompleteness theorem for mathematics. In the same way as no finite set of axioms Ai can be complete, no computer Ui that can perform only a finite and fixed number of operations per time unit is universal. This is illustrated below: For every set of axioms Ai there exists a statement Gi not provable in Ai, but provable in Ai+1; similarly, for every machine Ui there is a problem Pi not solvable on Ui, but solvable on Ui+1.
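The time-varying-variables counterexample can be mimicked in a toy simulation (my construction, not from the source): each reading of X_i is stamped with the time at which it was taken, and a reading is correct only if taken at t = t0. A machine that reads n variables per time unit captures them all at t0, while one reading fewer per time unit necessarily returns at least one stale value.

```python
def run_machine(reads_per_tick, n, t0=0):
    """Simulate reading n time-varying variables X_i(t).

    A reading taken during tick t records the pair (i, t); it is 'correct'
    iff t == t0.  The machine reads up to reads_per_tick variables per
    tick; unread variables keep evolving as time advances.
    Returns True iff every variable was captured at t = t0.
    """
    t = t0
    readings = []
    remaining = list(range(n))
    while remaining:
        batch, remaining = remaining[:reads_per_tick], remaining[reads_per_tick:]
        readings.extend((i, t) for i in batch)  # each read sees the current t
        t += 1                                  # time advances; unread X_i change
    return all(when == t0 for _, when in readings)
```

A machine with capacity n succeeds, and the same machine fails as soon as the problem grows to n+1 variables, mirroring the M, M', M'', ... succession above.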
This result applies not only to idealized models of computation, such as the Turing Machine, the Random Access Machine, and the like, but also to all known general-purpose computers, including existing conventional computers (both sequential and parallel), as well as contemplated unconventional ones such as biological and quantum computers. It is true for computers that interact with the outside world to read input and return output (unlike the Turing Machine, but like every realistic general-purpose computer). It is also valid for computers that are given unlimited amounts of time and space to perform their computations (like the Turing Machine, but unlike realistic computers). Even accelerating machines that increase their speed at every step (doubling it, squaring it, or applying any other fixed acceleration set in advance) cannot be universal! The only constraint that we have placed on the computer (or model of computation) that claims to be universal is that the number of operations of which it is capable per time unit be finite and fixed once and for all. In this regard, it is important to note that: 1. The requirement that the number of operations per time unit, or step, be finite is necessary for any "reasonable" model of computation [see, for example, M. Sipser, Introduction to the Theory of Computation, PWS Publishing Company, Boston, Massachusetts, 1997, p. 141]; 2. The requirement that this number be fixed once and for all is necessary for any model that purports to be "universal" [see, for example, D. Deutsch, The Fabric of Reality, Penguin Books, London, England, 1997, p. 210]. Without these two requirements, the theory of computation in general, and the theory of algorithms in particular, would be totally irrelevant. The consequences for theoretical and practical computing are significant. Thus the conjectured "Church-Turing Thesis" is false.
It is no longer true that, given enough time and space, any single general-purpose computer, defined a priori, can perform all computations that are possible on all other computers. Not the Turing Machine, not your laptop, not the most powerful of supercomputers. In view of the computational problems mentioned above (and detailed in the papers below), the only possible universal computer would be one capable of an infinite number of operations per time unit. In fact, this work has led to the discovery of computations that can be performed on a quantum computer but not on any classical machine (even one with infinite resources), thus showing for the first time that the class of problems solvable by classical means is a true subset of the class of problems solvable by quantum means. Consequently, the only possible universal computer would have to be quantum (as well as being capable of an infinite number of operations per time unit). For details, please see the references. ## Misconceptions, Criticisms, Replies, and a Computational Challenge "Almost all the other fellows do not look from the facts to the theory but from the theory to the facts; they cannot get out of the network of already accepted concepts; instead, comically, they only wriggle about inside." Albert Einstein in a letter to Erwin Schrödinger, August 8, 1935, quoted in The Age of Entanglement by Louisa Gilder, Vintage Books, 2009, p. 170. What follows are some misconceptions and criticisms relating to my non-universality result, and responses to them. They are presented as a dialog with an interlocutor who, I assume, has read my papers listed below on the myth of universal computation. The presentation concludes with a challenge. Misconception 1: "So, you describe a number of functions that are uncomputable. What's new about that? Uncomputable functions have been known since the time of Turing." Response: This is incorrect. The functions that I describe are eminently computable. 
In my papers listed below, every function F of n variables can be easily evaluated by a computer capable of at least n elementary operations per time unit (an n-processor parallel computer, for example). However, a computer capable of only n-1 or fewer elementary operations per time unit cannot compute F. Non-universality in computation immediately follows by simple induction. Misconception 2: "You are proposing computations that cannot be simulated efficiently. What's new about that? Your own book Parallel Computation: Models and Methods (Prentice Hall 1997) and your earlier papers presented such computations that can be performed in constant time by n processors, but require time exponential in n when executed on n-1 or fewer processors." Response: The error in the statement above is in the phrase "cannot be simulated efficiently". Indeed, my non-universality result does not follow from computations that cannot be simulated efficiently. It follows from computations that cannot be simulated at all. Thus, for each of the functions F of n variables that I describe in my papers listed below, no computer capable of fewer than n elementary operations per time unit can simulate the actions of a computer capable of n elementary operations per time unit. The latter computer is capable of evaluating F successfully, the former is not capable of doing so, even if given an infinite amount of time and memory space. It is this impossibility of simulation that leads to non-universality. Misconception 3: "What do you think about this way of attacking your time-varying variables problem: First of all, we should separate the tasks into sensing and computation tasks. That means, I assume that there are sensors equipped with memory which read the values of the variables at time t and write them into appropriate memory locations. So, for every t we have memory locations x_1(t), ..., x_n(t). Furthermore, assume that the values of n and t are stored in some other variables. 
Then, a universal machine would read the value of n (and t), read the values from memory and perform the requested computations on these values, which is a rather simple task. So, the main argument of this is that we allow for this separation. The sensors are peripheral components, which do not perform any computations, but "just" provide the input for the computation, i.e. they sense their values simultaneously at time t and store them in their respective memory locations, and the computations can access these values later on. I guess that you will argue that this requires n values to be stored in the memory of the machine at the same time which would contradict a specification of a machine which is independent of n. But if we can reduce the problem you posed to this separation of concerns, we would have the consistence with the traditional theory of computation except for the addition of these sensors, which are not considered to be a part of the universal computing device but of the input specification." Response: Surely, you cannot "separate" part of the universal computer (in this case the input unit) from the rest of the computer just to fit the problem. The universal computer is one unit, and a computational step is: [read, calculate, write]. The definition of `to compute' as `the process of transforming information' applies to all three phases of computation: 1. The input phase, where data such as keystrokes, mouse clicks, or temperatures are reduced, for example, to binary strings; 2. The calculation phase, where data are manipulated through arithmetic and logic operations, as well as operations on strings, and so on; and 3. The output phase, where data are produced as numbers on a screen, or rendered as images and sounds, for instance. Each of the three phases represents information transformation, is an integral part of every computation, and no computation worthy of that name can exclude any of them.
In particular, input and output are fundamental in the design and analysis of algorithms. Input-sensitive and output-sensitive computations often play an especially important role in deriving lower and upper bounds on algorithmic complexity. One of the most dramatic illustrations of the interweaving of input and output with calculation is the well-known linear-time planarity testing algorithm (J. Hopcroft and R. Tarjan, Efficient planarity testing, Journal of the ACM, Vol. 21, No. 4, 1974, pp. 549--568.) In order to determine whether a graph with V vertices is planar, Step 1 of that algorithm avoids a potentially quadratic-time computation, by reading in the edges of the graph one at a time from the outside world; if there is one more edge than 3V-6, the graph is declared non-planar. But let's assume you have set up n sensors and succeeded in solving the problem. What happens when you discover, the next morning, that there are now, not n, but n+1 inputs? Do you think it is a fair solution for you to go to the sensor shop, buy one more sensor and rush back to attach it to the extra input source and to the computer? And even if you did, what if by the time you return, the time T at which the result is needed would have passed? Misconception 4: "Your argument is great. It is simple and effective. However, it doesn't show that there is something theoretically flawed with the concept of a universal computer; it shows that a universal computer could never be physically realized. So the Church-Turing thesis is alright if we take it that the space/time needed is infinite. Having said that, I imagine that the distinction between the theoretical and the implementation claim is often overlooked, which makes the proof integral in making that distinction very sharp indeed."[From N.] Response: Thank you for reading. But, sorry, I beg to differ. 
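The edge-count early exit from the planarity example above is easy to state in code. This sketches just that preliminary step (a simple planar graph with V >= 3 vertices has at most 3V - 6 edges), not the full Hopcroft-Tarjan algorithm: edges are consumed from a stream one at a time, and the graph is rejected as non-planar as soon as the bound is exceeded, without ever storing the whole graph.

```python
def maybe_planar(num_vertices, edge_stream):
    """Early-exit edge count for planarity testing.

    Reads edges one at a time from edge_stream.  Returns False as soon as
    the count exceeds 3V - 6 (the graph is certainly non-planar); returns
    True otherwise, meaning the graph passed this filter only -- the full
    planarity test would still have to run afterwards.
    """
    bound = 3 * num_vertices - 6
    count = 0
    for _ in edge_stream:
        count += 1
        if count > bound:
            return False  # too many edges to be planar; stop reading
    return True
```

For K5 (5 vertices, 10 edges, bound 9) the filter rejects after the tenth edge; K4 (6 edges, bound 6) passes.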
Even if given infinite space and infinite time (which we allow the Turing Machine anyway), no computer that you define once and for all (and are not allowed to change ever again) can solve the problems that I define. The issue is not with infinite space and infinite time. The issue is: How many operations can you do in one slice of time? You have to define this for every theoretical (and of course practical) computer. Otherwise, analysis of algorithms becomes meaningless, and the running time of a program a void concept. For example, the Turing Machine can do one, two, or three operations per slice of time, depending on your definition, but it is always a finite number. It can read a symbol, then change state, or write a symbol, or move left or right or stay put. Then a new iteration starts. It cannot do an arbitrary number of these fundamental operations per slice of time (you may call the latter a step, an iteration, a time unit, whatever). In passing, I should say that it is this finiteness of the number of operations per time unit that caused the great success of computer science, making the Computer the greatest invention of the 20th century: A machine designed once and for all that can simulate any operation by another machine. In theory, you should be able to buy a computer and use it for the rest of your life, for it will never be faced with a computable function that it cannot compute. Or so we thought ... Suppose you have defined your machine once and for all (it can be a Turing Machine or a Supercomputer, or the chip in your digital camera or that in your toaster). I will now give you a computable function that it fails to compute. One example of such a computation is the problem of operating on variables that change with time (as described above). Even if given all the time in the world from here to eternity, even if given all the space in the Universe and beyond, you could not solve the problem, not even in theory with pen and paper. Why? 
Because you defined your machine and fixed it once and for all (as one should if one claims to have a Universal Computer that can simulate anything that another computer can compute). And herein lies the Achilles heel of the Universal Computer: I, as a designer of the problem, ask you for a definition of your purported Universal Computer; I then concoct a computation that thwarts it. This computation, of course, can be easily carried out by another "Universal Computer", but then I'll give you another problem, and so on. There is nothing here about implementation. I put no limits on what your machine can do, except for one: It must do a finite number of operations per time unit. This number can be as big as you want, it can even be defined by a function of time, but it must be fixed the moment you define your model, and it cannot be infinite. Once you define it you cannot change it. We can play the game with paper and pencil. I will always win. I believe that the result is perfectly theoretical as well as being perfectly practical. Misconception 5: "While I could not make much sense of your "proof" (right off the bat, I do not think 'spacially and temporally connected variables' is well defined, although I also think it suffers from many other flaws) I decided to take your challenge seriously, within the bounds and the scope of your restrictions, language and assumptions, in an attempt to define a reasonable sounding computational device that would solve the time-varying variables computation. I will assume that the variables appear each on a luminous display of some sort. I am going to assume relativity (please let me know if spacially and temporally connected variables is supposed to mean something else?). There is space "k" between displays so that, when they update their values, the light from their screens arrive at my computational device at nearly, but not precisely the same time (determined by 'k' and where my device is in the 'room'). 
I will move my computational device close to the displays so that I can exploit the hypotenuse the signals from each display must travel first to reach the sensor of my device. Then, it is simply a matter of setting the processing speed to a finite value around 'a constant * k meters / the speed of light.' In this way, my computational device is able to compute the function F as desired. My device has one sensor. It only needs one sensor, since the signals from the displays come in sequentially. Remember that the displays are separated by a distance and the signal from each display, after they update, must travel this distance (at the speed of light). So my device uses its one sensor to read in a display and then it reuses that sensor some time later after the signal from the next display has reached it (some time later)." Response: Right off the bat, as is clear from my papers, the spatial and temporal relationship among the variables is such that: 1) The variables occupy the same region in physical space; specifically, the distance separating any two variables is no larger than a small constant (whose magnitude depends on the general paradigm under consideration); 2) The variables are constrained by a unique parameter representing true physical time. Now turning to your solution, I would say it is original; unfortunately, I am afraid you are missing the point of the exercise. The problems I pose are to be treated independently of specific physical properties. For example, the time-varying variables could be anything one would want them to be (atmospheric pressures for instance, or humidity readings, etc.), and they should not necessarily be on display (they would need to be acquired, that is, measured, first). Light rays, the basis of your argument, should play no role in the solution (for they may not help in general). However, let's consider your one-sensor solution, assuming the variables are indeed displayed. 
The difficulty with such setups is that they inevitably break down at some point as the problem scales up. Specifically:
1) The dimensions of the sensor are fixed.
2) As the number of variables, and hence displays, N, grows, the angle of the light rays from the furthest display approaches 0. There will not be enough real estate on the sensor for that light to impinge upon.
3) Thus, the problem poser can make N sufficiently large so as to render the sensor useless.
And one more thing: How does your sensor handle multiple simultaneous (or perhaps overlapping) inputs from displays all equidistant from it? Criticism 1: "Your definition of computation is unconventional." Response: Absolutely, if one considers unconventional the passage of time, or the interactions among the constituents of the universe subject to the laws of nature, or for that matter any form of information processing. In any case, my definition of computation may be unconventional, but it is not unrealistic. Besides, it is important to realize that defining "to compute" as "what the Turing Machine does", which is quite commonly done (see, for example, L. Fortnow, The enduring legacy of the Turing machine, http://ubiquity.acm.org/article.cfm?id=1921573), leads to reasoning that is viciously circular. If computation is what the Turing Machine does, then clearly the Turing Machine can compute anything that is computable. Furthermore, the "Church-Turing Thesis" would move immediately from the realm of "conjecture" to that of "theorem". The fact that it has not done so to this day is witness to the uncertainty surrounding the widespread definition of "computation". As stated above, my counterexamples, by contrast, disprove the "Church-Turing Thesis". Criticism 2: "But the Turing Machine was not meant for this kind of computation." Response: Precisely. Furthermore, the non-universality result covers all computers, not just the Turing Machine.
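As an aside, the input-sensitive early exit invoked in the response to Misconception 3 above (Step 1 of the Hopcroft-Tarjan planarity algorithm) can be made concrete. The sketch below is my own illustrative reconstruction, not the published algorithm; the function name and the streaming interface are assumptions, and the bound 3n - 6 is the simple-graph bound for n >= 3 vertices:

```python
def planarity_prefilter(n_vertices, edge_stream):
    """Input-sensitive early exit in the spirit of Hopcroft-Tarjan Step 1:
    a simple graph on n >= 3 vertices with more than 3n - 6 edges cannot be
    planar, so reading stops as soon as that bound is exceeded."""
    bound = 3 * n_vertices - 6
    edges_read = 0
    for _edge in edge_stream:  # edges arrive one at a time from the outside world
        edges_read += 1
        if edges_read > bound:
            return False  # declared non-planar; no further input is read
    return True  # the edge count alone cannot rule out planarity


k5_edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]  # K5: 10 edges
print(planarity_prefilter(5, iter(k5_edges)))  # 10 > 9, so False
```

Feeding it the 10 edges of K5 (where 3n - 6 = 9) rejects after the tenth edge; K4's 6 edges never exceed 3n - 6 = 6, so the filter is inconclusive there and the rest of the algorithm must decide.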
Criticism 3: "Abstract models of computation do not concern themselves with input and output." Response: This opinion is held by those who believe that computation is the process that goes from input to output, while concerning itself with neither input nor output (see, for example, Fortnow cited above). By analogy, eating is all about digestion, and a Moonlight Sonata interpretation is nothing but the hitting of piano keys. Perhaps. Perhaps not. A model of computation is useful to the extent that it is a faithful reflection of reality, while being mathematically tractable. In computer science, input and output are not cumbersome details to be ignored; they are fundamental parts of the computation process, which must be viewed as consisting of three essential phases, namely, input, calculation, and output. A computer that does not interact with the outside world is useless. In that sense, the thermostat in your house is more powerful than the Turing Machine. Please remember that my result goes beyond the Turing Machine (which, by the way, is a very primitive model of computation). To ask computer scientists to stick to the Turing Machine and not to look beyond, would be as if physics stopped progressing beyond the knowledge of the Ancient Greeks. Criticism 4: "Classical computability theory does not include variables that change with time." Response: This is a serious lacuna in classical theory. My work shows that there are in fact many such lacunae. Their result is to severely restrict the definition of computation. Indeed, to define computation merely as function evaluation, with fixed input and fixed output, is unrealistic and naive. It also trivializes the Church-Turing Thesis (turning it into a tautology), because it necessarily leads to the kind of sterile circular reasoning mentioned above. Having said that, the time-varying variables counterexample is only one of many such refutations of universality in computation. 
Other examples arise in the computations listed above and detailed in the publications below. Criticism 5: "The Church-Turing Thesis applies only to classical computations." Response: This is certainly not the case. As the multitude of examples listed in my paper amply demonstrates, the commonly accepted statement of the Church-Turing Thesis is essentially this: "There exists a Universal Computer capable of performing any computation that is possible on any other computer." There are no restrictions, exceptions, or caveats whatsoever on the definition of computation. In fact, a typical textbook definition of computation is as follows: "A computation is a process that obeys finitely describable rules." [See Rucker below] What's more, it is suggested in every textbook on the subject that, thanks to the fundamental and complementary notions of simulation and universality, every general-purpose computer is universal: A Turing Machine, a Random-Access Machine, a Personal Computer, a Supercomputer, the processing chip in a cell phone, are all universal. (My result shows this claim to be false.) Going a little further, many authors consider all processes taking place in the Universe as computations. Interested readers may consult:
E.B. Davies. Building infinite machines. British Journal for Philosophy of Science, 52:671--682, 2001.
D. Deutsch. The Fabric of Reality. Penguin Books, London, England, 1997.
J. Durand-Lose. Abstract geometrical computation for black hole computation. Research Report No. 2004-15, Laboratoire de l'Informatique du Parallelisme, Ecole Normale Superieure de Lyon, Lyon, France, April 2004.
J. Gleick. The Information: A History, a Theory, a Flood. HarperCollins, London, 2011.
K. Kelly. God is the machine. Wired, 10(12), 2002.
S. Lloyd. Programming the Universe. Knopf, New York, 2006.
S. Lloyd and Y.J. Ng. Black hole computers. Scientific American, 291(11):53--61, November 2004.
R. Rucker. The Lifebox, the Seashell, and the Soul.
Thunder's Mouth Press, New York, 2005.
C. Seife. Decoding the Universe. Viking Penguin, New York, 2006.
T. Siegfried. The Bit and the Pendulum. John Wiley & Sons, Inc., New York, 2000.
F.J. Tipler. The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead. Macmillan, London, 1995.
T. Toffoli. Physics and Computation. International Journal of Theoretical Physics, 21:165--175, 1982.
V. Vedral. Decoding Reality. Oxford University Press, Oxford, United Kingdom, 2010.
J.A. Wheeler. Information, physics, quantum: The search for links. In Proceedings of the 3rd International Symposium on Foundations of Quantum Mechanics in Light of New Technology, pages 354--368. Tokyo, 1989.
J.A. Wheeler. At Home in the Universe. American Institute of Physics Press, Woodbury, New York, 1994.
S. Wolfram. A New Kind of Science. Wolfram Media, Champaign, Illinois, 2002.
K. Zuse. Calculating Space. MIT Technical Translation AZT-70-164-GEMIT, Massachusetts Institute of Technology (Project MAC), Cambridge, Mass. 02139, 1970.
I happen to agree with the above authors on the pervasive nature of computation. However, in order to reach my conclusion about non-universality in computation, I do not in fact need to go that far. My counterexamples use very simple computations to refute universality: computations whose variables change with time, computations whose variables affect one another, computations the complexity of whose steps changes with the passage of time, computations the complexity of whose steps depends (not on the time but) on the order of execution of each step, computations that involve variables obeying certain mathematical conditions, and so on. These are not uncommon computations. They may be considered unconventional only when contrasted with the standard view of computation as a rigid function evaluation, in which, given a variable x, it is required to compute f(x), in isolation of the rest of the world.
Criticism 6: " Actually, Alan Turing knew that computations involving physical time would cause problems for his machine." Response: This is the back-pedaling argument par excellence. Perhaps Turing knew about the computations that I describe here, but I doubt it. The Church-Turing Thesis is proof that he and Church were convinced of the universality of the Turing Machine. In any case, I am not aware of any writings by Turing, von Neumann, or anybody else, that hint to non-universality in computation, prior to my January 2005 paper on the myth of the universal computer. This type of argument, exemplified by Criticism 6, reminds me of the words of the American philosopher William James: "First, you know, a new theory is attacked as absurd; then it is admitted to be true, but obvious and insignificant; finally it is seen to be so important that its adversaries claim that they themselves discovered it." (William James, Pragmatism: A new name for some old ways of thinking, Lecture 6: Pragmatism's conception of truth, Longman Green and Co., New York, 1907, pp. 76--91.) Criticism 7: "The problem I see with the non-universality claim is that it does not appear to be falsifiable. It might be improved by thinking further about how to make the claim more specific, clear, testable, well-defined and worked out into detail. The entire claim would ideally be described in a few very worked out, specific, sentences - needing little external exposition or justification." Response: Have you had a chance to read any of my papers? There is no lack of mathematical formalism there. For example in one of my papers, M. Nagy and I used quantum mechanics to illustrate the first five of the six aforementioned paradigms. I am certain you will enjoy that one (see TR 2007-537). But let me try this since you would like something simple and crisp. How about the best understood problem in computer science, arguably sorting, but with a twist? 
Here is the problem: For a positive even integer n, where n is greater than or equal to 8, let n distinct integers be stored in an array A with n locations A[0], A[1], ..., A[n - 1], one integer per location. Thus A[j], for all j = 0, 1, ..., n - 1, represents the integer currently stored in the jth location of A. It is required to sort the n integers in place into increasing order, such that: 1. After step i of the sorting algorithm, for all i greater than or equal to 1, there is no j, with j = 0, 1, ..., n - 3, for which A[j] > A[j + 1] > A[j + 2]. 2. When the sort terminates we have: A[0] < A[1] < ... < A[n - 1]. Can you solve this problem using a universal computer, one that is fixed once and for all? This is an example of computations in which a mathematical condition is to be obeyed at every step. (The answer is in my TR 2011-580.) Criticism 8: "A fine student just presented your Non-Universality in Computation work in my graduate theory course. Going simply on her presentation, I am not impressed. Your page starts by quoting Hopcroft and Ullman and saying that it is clearly false. You took this quote out of the context of all of theoretical computer science which clearly defines that a 'computation' is to start with the entire input presented and is given as much time as it wants to read this input and do its computation. It is true that since Turing, the nature of computation has changed to require real time interactions with the world. But you should not misrepresent past work. Having not studied your arguments at length, the only statement that I have gotten is that a machine is unable to keep up if you only allowed it to read in a fixed number of bits of information per time step and you throw at it in real time an arbitrarily large number of bits of information per time step. In itself, this statement is not very deep." Response: I am sure the student did a wonderful job, but somehow the message did not get across.
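A brief aside on the sorting variant just stated: its per-step condition (no three consecutive entries strictly decreasing) is mechanically checkable, and a standard adjacent-swap pass is easily seen to violate it. The sketch below is purely illustrative; the bubble-sort pass is my own choice of "step" and is not the solution referred to in TR 2011-580:

```python
def violates_condition(A):
    """True if some three consecutive entries are strictly decreasing,
    i.e. A[j] > A[j + 1] > A[j + 2] for some j."""
    return any(A[j] > A[j + 1] > A[j + 2] for j in range(len(A) - 2))


def bubble_pass(A):
    """One left-to-right pass of adjacent-swap sorting, taken here as one 'step'."""
    for j in range(len(A) - 1):
        if A[j] > A[j + 1]:
            A[j], A[j + 1] = A[j + 1], A[j]
    return A


A = [8, 7, 6, 5, 4, 3, 2, 1]
bubble_pass(A)
print(A, violates_condition(A))  # [7, 6, 5, 4, 3, 2, 1, 8] True
```

After one pass on the reversed array, the result still contains the decreasing triple 7 > 6 > 5: sorting the array is easy, but honoring the condition after every step is the crux of the problem.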
I will take your remarks one by one. Remark A: "You took this quote out of the context of all of theoretical computer science which clearly defines that a 'computation' is to start with the entire input presented and is given as much time as it wants to read this input and do its computation." Response to Remark A: 1) I did not find your definition of 'computation' anywhere. It is certainly not in Hopcroft and Ullman's book. 2) Your definition of 'computation' applies narrowly to some primitive computational models, such as the Turing Machine, Cellular Automata, and so on. 3) Because of (2), your definition trivializes the Church-Turing Thesis, rendering it a tautology: By defining 'computation' as 'what the Turing Machine does', it obviously follows that the Turing Machine can compute everything that is 'computable'. As mentioned earlier (see the response to Criticism 1), this is a typical example of circular reasoning. 4) Your definition is not sufficiently general to capture the richness and complexity of the notion of 'computation'. Others have proposed more encompassing definitions of 'computation'. Here are a few quotes: "In a sense Nature has been continually computing the 'next state' of the universe for billions of years; all we have to do - and actually, all we can do - is 'hitch a ride' on this huge ongoing computation." Tommaso Toffoli, Physics and Computation, International Journal of Theoretical Physics, 21, 1982, 165-175. "Computation is physical; it is necessarily embodied in a device whose behaviour is guided by the laws of physics and cannot be completely captured by a closed mathematical model. This fact of embodiment is becoming ever more apparent as we push the bounds of those physical laws." Susan Stepney, The neglected pillar of material computation, Physica D, 237(9):1157--1164, 2008. "Information sits at the core of physics ...
every particle, every field of force, even the space-time continuum itself derives its function, its meaning, its very existence entirely ... from answers to yes-or-no questions ... all things physical are information-theoretic in origin ... " John Archibald Wheeler, Information, physics, quantum: The search for links, in: W. Zurek (ed.) Complexity, Entropy, and the Physics of Information, Addison-Wesley, Redwood City, CA, 1990. "Think of all our knowledge-generating processes, our whole culture and civilization, and all the thought processes in the minds of every individual, and indeed the entire evolving biosphere as well, as being a gigantic computation. The whole thing is executing a self-motivated, self-generating computer program." David Deutsch, The Fabric of Reality, Penguin Books, London, England, 1997. 5) And one more thing about your point that a computation is to "start with the entire input presented". Since all of my counterexamples indeed require the entire input to be "present", I will assume that you mean "start with the entire input residing in the memory of the computer". (Incidentally, some of my counterexamples assume the latter as well.) The relevant point here is this: Hopcroft himself has a well-known algorithm that makes no such assumption about the entire input residing in memory. As stated in response to Misconception 3 above, one of the most dramatic illustrations of the interweaving of input and output with calculation is the well-known linear-time planarity testing algorithm of Hopcroft and Tarjan. 
In order to determine whether a graph with n vertices is planar, Step 1 of that algorithm avoids a computation that could potentially run in time at least quadratic in n, by reading in the edges of the graph one at a time from the outside world; if there is one more edge than the absolute maximum stipulated by Euler's formula, namely 3n - 6 (the paper actually uses the loose bound 3n - 3 in order to allow for n < 3), the graph is declared non-planar. (J. Hopcroft and R. Tarjan, Efficient planarity testing, J. ACM 21(4), 1974, pp. 549-568.) Remark B: "It is true that since Turing, the nature of computation has changed to require real time interactions with the world. But you should not misrepresent past work." Response to Remark B: 1) I am glad to see that you agree with me about the nature of computation. You should know, however, that my counterexamples to universality are not all about "real time interaction with the world". There is a list of such counterexamples at the beginning of this article, and a list of references to my papers near the end of the article. One counterexample involving mathematical constraints (it is a variant of sorting, in which the entire input is available in memory at the outset of the computation) is described in my response to Criticism 7. Note also that my nonuniversality result applies to putative 'universal computers' capable of interaction with the outside world. These 'universal computers' are endowed with unlimited memories and are allowed an indefinite amount of time to solve the problems they are given. They all still fail the test of universality. 2) However, I do take exception to the claim that I misrepresented past work.
Below are quotes from famous computer scientists asserting, without caveat, exception, or qualification, that a universal computer is possible (often explicitly stating that such a universal computer is the Turing Machine, and essentially taking for granted the unproven Church-Turing Thesis): "A Turing machine can do everything that a real computer can do." M. Sipser, Introduction to the Theory of Computation, PWS Publishing Company, Boston, MA, 1997, p. 125. "Anything which can be computed can be computed by a Turing Machine." S. Abramsky et al, Handbook of Logic in Computer Science, Clarendon Press, Oxford, 1992, p. 123. "It is theoretically possible, however, that Church's Thesis could be overthrown at some future date, if someone were to propose an alternative model of computation that was publicly acceptable as fulfilling the requirement of 'finite labor at each step' and yet was provably capable of carrying out computations that cannot be carried out by any Turing machine. No one considers this likely." H.R. Lewis and C.H. Papadimitriou, Elements of the Theory of Computation, Prentice Hall, Englewood Cliffs, NJ, 1981, p. 223. "...if we have shown that a problem can (or cannot) be solved by any TM, we can deduce that the same problem can (or cannot) be solved by existing mathematical computation models nor by any conceivable computing mechanism. The lesson is: Do not try to solve mechanically what cannot be solved by TMs!" D. Mandrioli and C. Ghezzi, Theoretical Foundations of Computer Science, John Wiley, New York, NY, 1987, p. 152. "...any algorithmic problem for which we can find an algorithm that can be programmed in some programming language, any language, running on some computer, any computer, even one that has not been built yet but can be built, and even one that will require unbounded amounts of time and memory space for ever-larger inputs, is also solvable by a Turing machine." D.
Harel, Algorithmics: The Spirit of Computing, Addison-Wesley, Reading, MA, 1992, p. 233. "It is possible to build a universal computer: a machine that can be programmed to perform any computation that any other physical object can perform. ... any computation that can be performed by any physical computing device can be performed by any universal computer, as long as the latter has sufficient time and memory." D. Hillis, The Pattern on the Stone, Basic Books, New York, NY, 1998, pp. 63 - 64. Hundreds of such statements can be found in the literature. I provide a sample in TR 2006-511 (see below). Finally, please note that to correct previous mistakes is not to misrepresent the past. This is how science advances. Newton, Darwin, and Einstein were giants who built great scientific edifices. Each edifice, magnificent yet incomplete. Remark C: "Having not studied your arguments at length, the only statement that I have gotten is that a machine is unable to keep up if you only allowed it to read in a fixed number of bits of information per time step and you throw at it in real time an arbitrarily large number of bits of information per time step. In itself, this statement is not very deep." Response to Remark C: 1) As mentioned above, the "time-varying variables" computation is but one of many counterexamples to universality. 2) As you well know, a reasonable model of computation must be finite. The Turing Machine has a finite alphabet, a finite set of states, and a finite set of elementary operations per step. To assume otherwise would render the fields of complexity theory and algorithm design and analysis useless. 3) Your characterization of my result is erroneous. I describe computable functions that no machine that claims to be universal can compute. 4) I ask you to define a "universal computer" fulfilling the requirement of 'finite labor at each step' (to quote Lewis and Papadimitriou again), and I will give you a computable function that it cannot compute.
Which, finally, brings us to the challenge. A Computational Challenge: Anyone who still does not accept my result, namely, that universality in computation is a myth, has but one option: to prove it wrong. In order to do this, one must exhibit a universal computer capable of a finite and fixed number of operations per time unit, on which any one of the computations in the following classes, and described in the papers below, can be performed:
1. Computations with time-varying variables
2. Computations with time-varying computational complexity
3. Computations with rank-varying computational complexity
4. Computations with interacting variables
5. Computations with global mathematical constraints
6. Computations with uncertain time constraints
"Facts do not go away when scientists debate rival theories to explain them." Stephen Jay Gould, Evolution as fact and theory, Discover 2, May 1981, pp. 34 - 37.
## References
Technical Reports
Technical Report No. 2006-511: Akl, S.G., "Universality in computation: Some quotes of interest", School of Computing, Queen's University, Kingston, Ontario, April 2006, 13 pages. Anyone planning to read the papers below should read these quotes first; they provide the context.
Technical Report No. 2005-492: Akl, S.G., "The myth of universal computation", School of Computing, Queen's University, Kingston, Ontario, January 2005, 21 pages.
Technical Report No. 2005-495: Nagy, M. and Akl, S.G., "On the importance of parallelism for quantum computation and the concept of a universal computer", School of Computing, Queen's University, Kingston, Ontario, May 2005, 18 pages.
Technical Report No. 2005-500: Nagy, M.
and Akl, S.G., "Quantum computing: Beyond the limits of conventional computation", School of Computing, Queen's University, Kingston, Ontario, July 2005, 16 pages. PDF (185.825 Kbytes) Technical Report No. 2006-507 (174.989 Kbytes) Nagy, M. and Akl, S.G., "Coping with decoherence: Parallelizing the quantum Fourier transform", School of Computing, Queen's University, Kingston, Ontario, March 2006, 12 pages. PDF (164.186 Kbytes) Technical Report No. 2006-508 (225.224 Kbytes) Akl, S.G., "Even accelerating machines are not universal", Technical Report No. 2006-508, School of Computing, Queen's University, Kingston, Ontario, March 2006, 16 pages. PDF (214.144 Kbytes) Technical Report No. 2006-510 (3,020.507 Kbytes) Fraser, R. and Akl, S.G., "Accelerating machines", School of Computing, Queen's University, Kingston, Ontario, March 2006, 26 pages. PDF (267.668 Kbytes) Technical Report No. 2006-526 (322,653 Kbytes) Akl, S.G., "Unconventional computing problems", School of Computing, Queen's University, Kingston, Ontario, November 2006, 21 pages. PDF (300,513 Kbytes) Technical Report No. 2007-537 (374,825 Kbytes) Nagy, N. and Akl, S.G., "Parallelism in quantum information processing defeats the Universal Computer", Technical Report No. 2007-537, School of Computing, Queen's University, Kingston, Ontario, June 2007, 28 pages. PDF (326,151 Kbytes) Technical Report No. 2009-561, (3,802.636 Kbytes) Akl, S.G., "Time travel: A new hypercomputational paradigm", School of Computing, Queen's University, Kingston, Ontario, July 2009, 18 pages. PDF (304.904 Kbytes) Technical Report No. 2011-580, (686.848 Kbytes) Nagy, N. and Akl, S.G., "Time inderterminacy, non-universality in computation, and the demise of the Church-Turing thesis", School of Computing, Queen's University, Kingston, Ontario, August 19, 2011, 27 pages. PDF (166.530 Kbytes) Technical Report No. 
2013-609: Akl, S.G., "Nonuniversality in computation: Thirteen misconceptions rectified", School of Computing, Queen's University, Kingston, Ontario, August 14, 2013, 26 pages.
Technical Report No. 2015-625: Akl, S.G. and Salay, N., "On computable numbers, nonuniversality, and the genuine power of parallelism", School of Computing, Queen's University, Kingston, Ontario, July 29, 2015, 13 pages.
Technical Report No. 2018-634: Akl, S.G., "Unconventional wisdom: Superlinear speedup and inherently parallel computations", School of Computing, Queen's University, Kingston, Ontario, February 23, 2018, 16 pages.
This work has been published as follows (publications listed in reverse chronological order):
Journal papers
Akl, S.G., "Unconventional wisdom: Superlinear speedup and inherently parallel computations", International Journal of Unconventional Computing, Vol. 13, Nos. 4--5, 2018, pp. 283 - 307.
Akl, S.G., "Nonuniversality explained", International Journal of Parallel, Emergent and Distributed Systems, Vol. 31, Issue 3, May 2016, pp. 201 - 219.
Akl, S.G. and Salay, N., "On computable numbers, nonuniversality, and the genuine power of parallelism", International Journal of Unconventional Computing, Vol. 11, Nos. 3--4, 2015, pp. 283 - 297.
Akl, S.G., "What is computation?", International Journal of Parallel, Emergent and Distributed Systems, Vol. 29, Issue 4, August 2014, pp. 337 - 345.
Nagy, N. and Akl, S.G., "Computing with uncertainty and its implications to universality", International Journal of Parallel, Emergent and Distributed Systems, Vol. 27, Issue 2, April 2012, pp. 169 - 192.
Akl, S.G., "Time travel: A new hypercomputational paradigm", International Journal of Unconventional Computing, Vol. 6, No. 5, 2010, pp. 329 - 351.
Fraser, R. and Akl, S.G., "Accelerating machines: a review", International Journal of Parallel, Emergent and Distributed Systems, Vol. 23, No. 1, February 2008, pp. 81 - 104.
- Akl, S.G., "Unconventional computational problems with consequences to universality", International Journal of Unconventional Computing, Vol. 4, No. 1, 2008, pp. 89 - 98.
- Akl, S.G., "Even accelerating machines are not universal", International Journal of Unconventional Computing, Vol. 3, No. 2, 2007, pp. 105 - 121.
- Nagy, M. and Akl, S.G., "Quantum computing: Beyond the limits of conventional computation", International Journal of Parallel, Emergent and Distributed Systems, Special Issue on Emergent Computation, Vol. 22, No. 2, April 2007, pp. 123 - 135.
- Nagy, M. and Akl, S.G., "Quantum measurements and universal computation", International Journal of Unconventional Computing, Vol. 2, No. 1, 2006, pp. 73 - 88.
- Akl, S.G., "Three counterexamples to dispel the myth of the universal computer", Parallel Processing Letters, Vol. 16, No. 3, September 2006, pp. 381 - 403.

Conference papers

- Nagy, N. and Akl, S.G., "Computations with uncertain time constraints: Effects on parallelism and universality", Proceedings of the Tenth International Conference on Unconventional Computation, Turku, Finland, June 2011, pp. 152 - 163.
- Akl, S.G., "Gödel's incompleteness theorem and nonuniversality in computing", Proceedings of the Workshop on Unconventional Computational Problems, Sixth International Conference on Unconventional Computation, Kingston, Canada, August 2007, pp. 1 - 23.
- Nagy, M. and Akl, S.G., "Parallelism in quantum information processing defeats the Universal Computer", Proceedings of the Workshop on Unconventional Computational Problems, Sixth International Conference on Unconventional Computation, Kingston, Canada, August 2007, pp. 25 - 52.
- Nagy, M. and Akl, S.G., "On the importance of parallelism for quantum computation and the concept of a universal computer", Proceedings of the Fourth International Conference on Unconventional Computation, Sevilla, Spain, October 2005, pp. 176 - 190.
Book chapters

- Akl, S.G., "Unconventional computational problems", in: Encyclopedia of Complexity and Systems Science, Springer, New York, 2018, pp. 631 - 639.
- Akl, S.G., "Nonuniversality in computation: Fifteen misconceptions rectified", in: Advances in Unconventional Computing, Adamatzky, A., Ed., Springer, Cham, Switzerland, 2017, pp. 453 - 465.
- Akl, S.G. and Salay, N., "On computable numbers, nonuniversality, and the genuine power of parallelism", Chapter Four in: Emergent Computation: A Festschrift for Selim G. Akl, Adamatzky, A., Ed., Springer, Cham, Switzerland, 2017, pp. 57 - 69.
- Akl, S.G. and Nagy, M., "Introduction to Parallel Computation", in: Parallel Computing: Numerics, Applications, and Trends, Trobec, R., Vajtersic, M., and Zinterhof, P., Eds., Springer-Verlag, London, 2009; Chapter 2, pp. 43 - 80.
- Akl, S.G. and Nagy, M., "The Future of Parallel Computation", in: Parallel Computing: Numerics, Applications, and Trends, Trobec, R., Vajtersic, M., and Zinterhof, P., Eds., Springer-Verlag, London, 2009; Chapter 15, pp. 471 - 510.
- Akl, S.G., "Evolving Computational Systems", in: Parallel Computing: Models, Algorithms, and Applications, Rajasekaran, S. and Reif, J.H., Eds., Taylor and Francis, CRC Press, Boca Raton, Florida, 2008; Chapter 1, pp. 1 - 22.
- Akl, S.G., "Conventional or Unconventional: Is Any Computer Universal?", in: From Utopian to Genuine Unconventional Computers, Adamatzky, A. and Teuscher, C., Eds., Luniver Press, Frome, United Kingdom, 2006, pp. 101 - 136.
- Akl, S.G., "The Myth of Universal Computation", in: Parallel Numerics, Trobec, R., Zinterhof, P., Vajtersic, M., and Uhl, A., Eds., Part 2, Systems and Simulation, University of Salzburg, Salzburg, Austria and Jozef Stefan Institute, Ljubljana, Slovenia, 2005, pp. 211 - 236.

Invited Lecture

- S.G.
Akl, Ubiquity and simultaneity: The science and philosophy of space and time in unconventional computation, Keynote address, Conference on the Science and Philosophy of Unconventional Computing, The University of Cambridge, Cambridge, United Kingdom, 2009.

## Computing in the Presence of Uncertainty

Contributor: Selim Akl

This project is concerned with computations whose characteristics are akin to certain unique phenomena that occur in different domains of science. We are particularly interested in systems whose parameters are altered unpredictably whenever one of these parameters is measured or modified. Examples of such computational environments include those in which Heisenberg's uncertainty principle of quantum physics is witnessed, or those in which Le Chatelier's principle of chemical systems under stress manifests itself. A study of these systems uncovers computations that are inherently parallel in the strong sense, meaning that they are efficiently executed in parallel, but impossible to carry out sequentially. For details, please see:

- Akl, S.G., "Coping with uncertainty and stress: A parallel computation approach".
- Technical Report No. 2003-466: Akl, S.G. and Yao, W., "An application of parallel computation to dynamical systems", School of Computing, Queen's University, Kingston, Ontario, June 2003, 10 pages.
- Technical Report No. 2003-470: Akl, S.G. and Yao, W., "Parallel computation and measurement uncertainty in nonlinear dynamical systems", School of Computing, Queen's University, Kingston, Ontario, September 2003, 15 pages.
- Technical Report No. 2004-490: Cordy, B.J. and Akl, S.G., "Parallel computation and avoidance of chaos", School of Computing, Queen's University, Kingston, Ontario, December 2004, 9 pages.
## Parallel Real-Time Computation

Contributor: Selim Akl

The prevalent paradigm of computation, to which everyone who uses computers is accustomed, is one in which all the data required by an algorithm are available when the computer starts working on the problem to be solved. A different paradigm is real-time computation. Here, not all inputs are given at the outset. Rather, the algorithm receives its data (one or several at a time) during the computation, and must incorporate the newly arrived inputs in the solution obtained so far. Often, the data-arrival rate is constant; specifically, N data are received every T time units, where both N and T are fixed in advance. A fundamental property of real-time computation is that certain operations must be performed by specified deadlines. Consequently, one or more of the following conditions may be imposed:

1. Each received input (or set of inputs) must be processed within a certain time after its arrival.
2. Each output (or set of outputs) must be returned within a certain time after the arrival of the corresponding input (or set of inputs).

Thus, for example, it may be crucial for an application that each input be operated on as soon as it is received. Similarly, each partial solution (as well as the final one) may need to be returned as soon as it is available. Our work shows that within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class, when computed in parallel, is far superior in speed and quality to the best one obtained on a sequential computer. Recent results are described in the following papers:

- Technical Report No. 2002-457: Akl, S.G., "Computing in the presence of uncertainty: Disturbing the peace", School of Computing, Queen's University, Kingston, Ontario, June 2002, 12 pages.
- Technical Report No.
2001-453: Akl, S.G., "Discrete steepest descent in real time", Department of Computing and Information Science, Queen's University, Kingston, Ontario, November 2001, 14 pages.
- Technical Report No. 2001-444: Nagy, N. and Akl, S.G., "The maximum flow problem: A real-time approach", Department of Computing and Information Science, Queen's University, Kingston, Ontario, March 2001, 20 pages.
- Technical Report No. 2001-443: Akl, S.G., "Superlinear performance in real-time parallel computation", Department of Computing and Information Science, Queen's University, Kingston, Ontario, March 2001, 17 pages.
- Akl, S.G., "Parallel real-time computation: Sometimes quantity means quality".
- Technical Report No. 2000-441: Nagy, M. and Akl, S.G., "Real-time minimum vertex cover for two-terminal series-parallel graphs", Department of Computing and Information Science, Queen's University, Kingston, Ontario, October 2000, 18 pages.
- Technical Report No. 2000-438: Bruda, S.D. and Akl, S.G., "Pursuit and evasion on a ring: An infinite hierarchy for parallel real-time systems", Department of Computing and Information Science, Queen's University, Kingston, Ontario, September 2000, 19 pages.
- Technical Report No. 2000-435: Bruda, S.D. and Akl, S.G., "Real-time computation: A formal definition and its applications", Department of Computing and Information Science, Queen's University, Kingston, Ontario, February 2000, 23 pages.
- Technical Report No. 99-433: Akl, S.G., "Nonlinearity, maximization, and parallel real-time computation", Department of Computing and Information Science, Queen's University, Kingston, Ontario, November 1999, 16 pages.
- Technical Report No. 99-429: Bruda, S.D.
and Akl, S.G., "On the power of real-time Turing machines: k tapes are more powerful than k-1 tapes", Department of Computing and Information Science, Queen's University, Kingston, Ontario, July 1999, 8 pages.
- Technical Report No. 99-428: Bruda, S.D. and Akl, S.G., "Towards a meaningful formal definition of real-time computations", Department of Computing and Information Science, Queen's University, Kingston, Ontario, July 1999, 16 pages.
- Technical Report No. 99-424: Akl, S.G. and Bruda, S.D., "Parallel real-time numerical computation: Beyond speedup III", Department of Computing and Information Science, Queen's University, Kingston, Ontario, May 1999, 16 pages.
- Technical Report No. 99-423: Akl, S.G. and Bruda, S.D., "Parallel real-time cryptography: Beyond speedup II", Department of Computing and Information Science, Queen's University, Kingston, Ontario, May 1999, 13 pages.
- Technical Report No. 99-422: Akl, S.G., "Secure file transfer: A computational analog to the furniture moving paradigm", Department of Computing and Information Science, Queen's University, Kingston, Ontario, March 1999, 17 pages.
- Technical Report No. 99-421: Akl, S.G. and Bruda, S.D., "Parallel real-time optimization: Beyond speedup", Department of Computing and Information Science, Queen's University, Kingston, Ontario, January 1999, 12 pages.
- Technical Report No. 98-420: Bruda, S.D. and Akl, S.G., "A case study in real-time parallel computation: Correcting algorithms", Department of Computing and Information Science, Queen's University, Kingston, Ontario, December 1998, 24 pages.
- Technical Report No. 98-418: Bruda, S.D.
and Akl, S.G., "The characterization of data-accumulating algorithms", Department of Computing and Information Science, Queen's University, Kingston, Ontario, August 1998, 13 pages.
- Technical Report No. 98-417: Bruda, S.D. and Akl, S.G., "On the data-accumulating paradigm", Department of Computing and Information Science, Queen's University, Kingston, Ontario, April 1998, 13 pages.

## New Paradigms in Parallel Computation

Contributor: Selim Akl

There are two aspects to this work:

1. The study of computational problems which are fundamentally different from the traditional problems in computer science. Examples of such problems occur in situations where the input data are received in real time, or vary as a function of time, or when the results affect subsequent inputs, or are needed by a certain deadline. They also occur in computations where computers interact with their environment, and move about it freely and autonomously. These paradigms represent a radical departure from our conventional view of computing, where 'computation' is regarded as the process of evaluating a function using static data, all of which are available at the outset.
2. The exploration of models of computation that exploit results from other branches of knowledge to further our understanding of the nature of computation. Instances of such models include ones developed, for example, in optics, quantum physics, biology, and chaotic dynamical systems.

# RECENT WORK

## Multiuser Detection in Cellular Basestations

Contributor: Selim Akl

The aim of this project is to design a computational solution to the problem of multiuser detection and interference cancellation in wireless cellular basestations. This approach would allow all users in a cellular radio system wishing to communicate to be detected simultaneously at the base station.
An important component of the project is the analysis and implementation of parallel algorithms for solving large systems of linear equations. (This is joint work with P.J. McLane, N. Majikian, and V.C. Hamacher of the Department of Electrical and Computer Engineering, and is funded by CITO, the Communications and Information Technology Ontario centre of excellence.)

## Broadcasting with Selective Reduction as a Model of Parallel Computation

Contributor: Selim Akl

Broadcasting with selective reduction (BSR) is a model of parallel computation. It is an extension of the Parallel Random Access Machine (PRAM), the most popular model of parallel computation. It supports all forms of memory access allowed by the PRAM, that is, Exclusive Read (ER), Exclusive Write (EW), Concurrent Read (CR), and Concurrent Write (CW), with an additional BROADCAST instruction which allows all N processors to gain access to all M memory locations simultaneously for the purpose of writing. It has been shown that while an implementation of BSR requires no more resources than the weakest variant of the PRAM, namely the EREW PRAM, BSR is more powerful than the strongest variant of the PRAM, namely the CRCW PRAM.

The BROADCAST instruction consists of three phases: broadcasting, selection, and reduction. The broadcast phase allows each of the N processors to write a record concurrently to all of the M memory locations. Each processor's record contains two fields, a tag and a datum, where the tag helps identify those locations in which the datum is to be stored. At the receiving end, each memory location selects a subset of the received data by comparing the tag value of each datum with a limit value, associated with that memory location, using a selection rule. In the last phase, the data selected by each memory location are reduced to a single value using a binary associative reduction rule, and this value is finally stored in that memory location.
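As a rough illustration, a single BROADCAST step as described above can be simulated sequentially. This is a hedged sketch: the function name, the particular selection rule, and the reduction rule below are illustrative choices, not part of the model's specification.

```python
# Sequential simulation of one BSR BROADCAST instruction (illustrative only;
# in the actual model all memory locations act simultaneously).
from functools import reduce
from operator import add

def bsr_broadcast(records, limits, select, reduce_rule, memory):
    """records: one (tag, datum) pair per processor.
    limits: one limit value per memory location.
    select: selection rule comparing a tag against a limit.
    reduce_rule: binary associative reduction (e.g. sum, max)."""
    for j, limit in enumerate(limits):
        # Broadcast + selection: every record reaches location j, which
        # keeps only the data whose tags satisfy the selection rule.
        selected = [datum for tag, datum in records if select(tag, limit)]
        # Reduction: combine the selected data into a single stored value.
        if selected:
            memory[j] = reduce(reduce_rule, selected)
    return memory

# With "tag <= limit" selection and sum reduction, one BROADCAST computes
# prefix sums: location j receives the sum of all data with tags up to its limit.
mem = bsr_broadcast(
    records=[(1, 10), (2, 20), (3, 30)],
    limits=[1, 2, 3],
    select=lambda tag, limit: tag <= limit,
    reduce_rule=add,
    memory=[0, 0, 0],
)
# mem == [10, 30, 60]
```

Prefix-sum computation in a single BROADCAST is in the spirit of the constant-time BSR solutions mentioned in the surrounding text.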
Many BSR algorithms have been designed to solve different traditional problems. For all the computational problems addressed so far, constant time solutions have been found using the BSR model. These solutions illustrate the power and elegance of the model.

## Computing with Optical Pipelined Buses

Contributors: Selim Akl and Sandy Pavel

There are a number of fundamental constraints which bound bus interconnections in parallel (electronic) systems in general: limited bandwidth, capacitive loading, and cross-talk caused by mutual inductance. Another limitation concerns the classical assumption of exclusive access to the bus resources, which limits throughput to a function of the end-to-end propagation time. Optical communications have emerged as an alternative solution to these problems. Unlike signal propagation in electronic buses, which is bidirectional, optical channels are inherently directional and have predictable delay per unit length. This allows a pipeline of signals to be created by the synchronized directional coupling of each signal at specific locations along the channel. The possibility in optics of pipelining the transmission of signals through a channel provides an alternative to exclusive bus access. Using this kind of spatial parallelism, the end-to-end propagation latency can be amortized over the number of parallel messages active at the same time on the bus.

We have introduced a computational model which incorporates some of the advantages and characteristics of two existing models, namely reconfigurable meshes and meshes with optical buses. It consists of a reconfigurable array which uses optical pipelined buses for communication (AROB). This model extends the capabilities of arrays with optical pipelined communications by using more powerful switches and different reconfiguration rules.
Some of the results obtained for the AROB model also hold for other types of arrays with optical pipelined buses, thus extending the results previously obtained for these models and making it possible to better understand the implications of using optical interconnections for massively parallel processing. The new model is shown to be extremely flexible, as demonstrated by its ability to efficiently simulate different variants of PRAMs, bounded degree networks, and reconfigurable networks. A number of applications of the AROB are presented, and its power is investigated. Our initial investigations revealed that this architecture is suitable for massively parallel applications in different areas such as low-level image processing (Hough transform, distance transforms), sparse matrix operations (multiplication, transpose), etc. As future work we intend to study the algorithmic implications of using this model to solve different problems which arise in communications, digital signal processing, and on-board satellite data processing.

## The Optical Hypermesh: Architectural Properties and Algorithms

Contributors: Selim Akl and Peter J. Taillon

With the advances being made in optical engineering, we are seeing many architects adapt traditional interconnection models to take advantage of the new technology. The mesh and lattice networks are well-studied architectures for which there exist many useful algorithms, but depending on the communication patterns dictated by the computation, their performance can be limited by their large diameter. The optical hypermesh is a mesh-based interconnection which uses fiber-optic multichannel switches to provide a communication path between nodes aligned along a particular dimension. Compared to a hypercube of similar size, the hypermesh matches the O(1) cost for permutations over nearest neighbours, while maintaining a significantly smaller diameter and a reduced physical interconnection complexity.
## Bounds and Algorithms for Cayley Graphs

Contributors: Selim Akl and Tanya Wolff

Many optimal algorithms have been found on the Hypercube network. A hypercube of degree n has n edges incident at each vertex. The nth dimension link joins a vertex u to v, where v differs from u in the nth bit of their binary representations. An attractive alternative to the Hypercube with smaller degree and diameter, called the Star, was proposed by Akers, Harel and Krishnamurthy in 1986. An n-Star (Sn) has n! vertices, labelled by the n! permutations of the set {1,2,...,n}. It has n-1 links incident to each vertex, and the neighbour of vertex u along dimension d, 2 <= d <= n, swaps the first and the dth symbols in the label of u. The Pancake model (Pn) is similar to the Star, except that the vertex adjacent to u along dimension d is labelled by "flipping" the first d symbols of u. Currently, the best known algorithm for sorting on the n-Star has time complexity O(n^3 log n). Sequentially, sorting can be done in O(n! log n!) time, so to have an optimal cost, a star with n! processing elements should take O(log n!) = O(n log n) time. Further research into achieving cost-optimal algorithms will include investigation of embedding known networks such as the hypercube and mesh into the Star and Pancake, which are inter-embeddable with a dilation cost of O(n), i.e., the maximum shortest distance between f(x) and f(y), where f is the embedding from vertices of Sn (or Pn) to vertices of Pn (or Sn) and (x,y) is an edge of Sn (or Pn). The exact diameter of the Pancake is not known and will be investigated.

## Parallel Algorithms for Circular Arc Graphs

Contributors: Selim Akl, B. Bhattacharya (Simon Fraser University), and L. Chen (Fundamental Research Laboratory)

Several efficient parallel algorithms were recently developed for problems defined on proper circular arc graphs. These include:

1. An optimal parallel algorithm to compute a largest cardinality subset of arcs in which every two members intersect.
An interesting feature of the algorithm is that it transforms the computational geometric problem at hand into a problem involving computations on 0-1 matrices, and then transforms the latter back into a ray shooting problem in computational geometry.
2. Algorithms for finding a maximum matching, partitioning into a minimum number of induced subgraphs each of which has a Hamiltonian cycle (path), partitioning into induced subgraphs each of which has a Hamiltonian cycle (path) with at least k vertices for a given k, and adding a minimum number of edges to make the graph contain a Hamiltonian cycle (path). We show that all these problems can be solved in logarithmic time with a linear number of EREW PRAM processors, or in constant time with a linear number of BSR processors.

A more important part of this work is perhaps the extension of the basic BSR model to allow simultaneous multiple BROADCAST instructions.

## Superunitary Speedup and Success Ratio

Contributors: Selim Akl and Lorrie Fava Lindon

The role of computers in society is expanding at a rapid pace and is not restricted to the bounds set out by the theory of parallel computation as it is currently formulated. In particular, dynamic computers which interact with their environment do not conform to the limitations of the speedup and slowdown (or Brent's) theorems. The phenomenon of a disproportionate decrease in execution time of p over q processors, for p > q, is referred to as superunitary speedup. An analogous phenomenon, called superunitary 'success ratio', occurs in dealing with tasks that can either succeed or fail, when there is a disproportionate increase in the success of p over q processors executing a task. A range of conditions were identified which lead to superunitary speedup or superunitary 'success ratio'. Furthermore, several new paradigms for problems which admit such superunitary behaviour were proposed. These results suggest that a new theory of parallel computation may be required to accommodate the new paradigms.
## An Implementation of Multiple Criteria BSR and its Applications

Contributors: Selim Akl and I. Stojmenovic (University of Ottawa)

1. A detailed description was provided of a BSR implementation that allows each datum to be tested for satisfaction of k criteria. Previously, all implementations of BSR, and all problems solved on it, had assumed that one selection criterion is applied to each datum in order to determine whether or not it participates in the reduction process.
2. It was shown how a number of computational geometric problems can be solved on this new implementation in constant time. These problems include computing the Voronoi diagram, vertical segment visibility, and rectangle enclosure in d dimensions. Their constant time solutions are the first of this kind for the number of processors used. Furthermore, these solutions allow an effective illustration of the power and elegance of BSR, as shown by the conciseness and simplicity of the algorithms it affords.

## Communication and Fault Tolerance Algorithms on a Class of Interconnection Networks

A number of interconnection networks for parallel computers are studied. These networks include, for example, the star, the generalized hypercube, and the multidimensional torus. The following results are obtained for these networks:

1. Optimal solutions to the communication problems of multinode broadcasting, single node scattering, and total exchange, under two different assumptions, namely single link availability and multiple link availability.
2. Fault-tolerant communication algorithms for the star based on the construction of multiple edge-disjoint spanning trees.

In addition, a general framework was derived for communication on a subclass of Cayley graph based networks.

## The Star Interconnection Network: Properties and Algorithms

Contributors: Selim Akl and Ke Qiu

The star network has received considerable attention recently as an attractive topology for interconnecting processors in a parallel computer.
Investigation of this network has led to the following results:

1. A characterization of the cycle structure of the star and a derivation of properties of its breadth first spanning tree.
2. Several algorithms for computing on the star, including algorithms for routing, broadcasting, and computing prefix sums, as well as algorithms for geometric and graph theoretic problems.
3. Algorithms for efficient load balancing on the star, with applications to the problems of selection and sorting.

## Parallel Computation of Weighted Matchings in Graphs

Contributors: Selim Akl and Constantine N.K. Osiakwan

Optimal parallel algorithms, designed to run on the EREW PRAM model of computation, were obtained for solving the following problems:

1. Computing a maximum weight perfect matching for complete weighted graphs.
2. Computing a minimum weight perfect matching for a set of points in the plane.
3. The assignment problem on complete weighted bipartite graphs.
4. The assignment problem for points in the plane.
5. Computing matchings in trees.
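The star and pancake adjacency rules described in the Cayley graph section above are easy to state in code. A minimal sketch, with permutations represented as tuples (the helper names are illustrative, not taken from the reports):

```python
def star_neighbors(u):
    """Neighbors of vertex u in the star S_n: for each dimension
    d = 2, ..., n, swap the first and the d-th symbols of u."""
    out = []
    for d in range(1, len(u)):  # 0-based index d is dimension d + 1
        v = list(u)
        v[0], v[d] = v[d], v[0]
        out.append(tuple(v))
    return out

def pancake_neighbors(u):
    """Neighbors of vertex u in the pancake P_n: for each dimension
    d = 2, ..., n, "flip" (reverse) the first d symbols of u."""
    return [tuple(reversed(u[:d])) + u[d:] for d in range(2, len(u) + 1)]

# Each vertex of S_4 and P_4 has n - 1 = 3 neighbors; the two graphs agree
# on dimensions 2 and 3 (reversing 2 or 3 symbols just swaps the end ones).
print(star_neighbors((1, 2, 3, 4)))     # [(2, 1, 3, 4), (3, 2, 1, 4), (4, 2, 3, 1)]
print(pancake_neighbors((1, 2, 3, 4)))  # [(2, 1, 3, 4), (3, 2, 1, 4), (4, 3, 2, 1)]
```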
# Can I find a mapping that minimizes the maximum distance ratio of certain vectors?

Let's say we have several points given as vectors. My goal is to distinguish the vectors, so I want to make them far from each other. Some of them are already far from each other, but some of them can be positioned very closely. I want to find a mapping function that separates the points that are close to each other, while still preserving the points that are already far away from each other. I do not care what form the mapping takes. Since the mapping will be employed as pre-processing, it does not have to be differentiable or even continuous. I think this problem is somewhat similar to 'minimizing the maximum distance ratio between the points'. Maybe this problem can be understood as stretching a crushed graph into a sphere-like isotropic one. I googled for an hour, but it seems that people are usually interested in selecting points that have such nice characteristics from a bunch of data, rather than mapping an existing set of points to a better one. So, in conclusion, I could not find anything useful. Maybe you think 'the neural network will naturally learn it while solving the classification problem'. But it failed, because it is already struggling with too many burdens. So, this is why I want to help my network with pre-processing.

• isn't it what kernels do? May 20 '20 at 20:43
• which kernel do you mean? – Jun May 21 '20 at 3:54
• The general concept of the kernel trick in kernel methods in machine learning, such as SVM. Kernels do what you're saying: transforming to a new hyperspace in which a discriminative hyperplane can more easily separate classes. May 21 '20 at 7:58
• this is interesting, but, why do you want to do this? Jun 19 '20 at 16:58
• "vector points" means "set of points described as vectors"? Is it a 1D, 2D, 3D, ... problem? The maximum distance will be reached by sending each point to infinity (except one of them, at the origin).
Do you have restrictions on the base space? Oct 17 '20 at 17:59

An interesting question. I would start by finding the n nearest neighbors of each data point, then calculate their center of mass c and the point's distance d to its nth nearest neighbor. The smaller d is, the larger the density is around a given point. You could then iteratively step every point away from its c in inverse proportion to the distance d, with a suitable step size. This would spread out the clusters. But this won't help you transform any new points in the dataset; maybe you can learn this arbitrary mapping R^n -> R^n by using another neural network and apply it to new samples? This is the first ad-hoc idea which came to my mind. It would be interesting to see a 2D animation of this.

A more rigorous approach might be a variational autoencoder: you can embed the data in a lower dimensional space with an approximately normal distribution. But it doesn't guarantee that clusters would be as spread out as you'd like. An alternative loss function would help with that; for example, every point's distance to its original nth closest neighbor should be as close to one as possible.

• Thanks for your advice. I also considered using a k-nearest neighbor (kNN) based scheme, but as you know it includes a max (or min) function in it. So I think it is very hard to use a kNN-related metric as a loss function to train a VAE, even though the goal mapping itself does not have to be differentiable. But maybe some relaxed version of it, such as a stochastic NN, could be a naive replacement. – Jun May 21 '20 at 3:58
• After studying neural networks more I recently came up with a new idea (might not be novel): train an autoencoder (it doesn't matter if the encoded "code" dimension is smaller than or the same size as the input) and add an extra penalty to the "code" activations if they are too close on known dissimilar inputs. I am about to write a blog article about this at some point.
Jan 5 at 21:28
• Too late to edit the previous comment. Surely this is not a novel idea; it combines an autoencoder with a contrastive loss. But wikipedia's article on the contractive autoencoder mentions only positive samples, which makes it more of a denoising autoencoder IMO. Jan 5 at 21:37
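The iterative spreading idea from the answer above can be sketched as follows. This is a minimal sketch assuming NumPy; the function name, step size, neighbor count, and iteration count are arbitrary illustrative choices, not from the answer.

```python
import numpy as np

def spread(points, n_neighbors=3, step=0.05, iters=30):
    """Push each point away from the center of mass c of its n nearest
    neighbors, with a step inversely proportional to the distance d to
    the n-th nearest neighbor (denser regions get spread out more)."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)          # ignore self-distances
        idx = np.argsort(dist, axis=1)[:, :n_neighbors]
        centers = pts[idx].mean(axis=1)         # local centers of mass c
        d = np.take_along_axis(dist, idx[:, -1:], axis=1)  # n-th NN distance
        away = pts - centers                    # direction away from c
        norm = np.linalg.norm(away, axis=1, keepdims=True) + 1e-12
        pts += step * away / (norm * (d + 1e-12))
    return pts
```

As the answer notes, this only transforms the given dataset; mapping new samples would require learning the resulting R^n -> R^n map, e.g. with another network.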
# Different forms of compactness and their relation

Given a topological space X one can define several notions of compactness:

- X is compact if every open cover has a finite subcover.
- X is sequentially compact if every sequence has a convergent subsequence.
- X is limit point compact (or Bolzano-Weierstrass) if every infinite set has an accumulation point.
- X is countably compact if every countable open cover has a finite subcover.
- X is σ-compact if it is the union of countably many compact subspaces.
- X is pseudocompact if its image under any continuous function to $\mathbb{R}$ is bounded.
- X is paracompact if every open cover admits an open locally finite refinement (i.e. every point of X has a neighborhood small enough to intersect only finitely many members of the refinement).
- X is metacompact if every open cover admits a point finite open refinement (i.e. every point of X is in only finitely many members of the refinement).
- X is orthocompact if every open cover has an interior preserving open refinement (i.e. given an open cover there is an open refinement such that at any point, the intersection of all open sets in the refinement containing that point is also open).
- X is mesocompact if every open cover has a compact-finite open refinement (i.e. given any open cover, we can find an open refinement such that every compact set intersects only finitely many members of the refinement).

So, there are quite a few notions of compactness (there are surely more than those I quoted up here). The question is: where are these definitions systematically studied? What I'm interested in, in particular, is knowing when one implies the other and when it does not (examples), &c. I can fully answer the question for the first three notions:

- Compact and first-countable --> Sequentially compact.
- Sequentially compact and second-countable --> Compact.
- Sequentially compact --> Limit-point compact.
- Limit point compact, first-countable and $T_1$ --> Sequentially compact.
but I'm absolutely ignorant about the other cases. Has this been systematically studied somewhere? If so, where?

- You will certainly find interesting Counterexamples in topology by L.A. Steen and J.A. Seebach, Jr. – Pietro Majer Aug 30 '10 at 21:52

- Years ago I spent some time thinking about ultrafilters. Recall that a filter on a set $X$ is a collection of subsets of $X$ such that if $A,B \subseteq X$ are in the filter, so is $A\cap B$, and if $A \subseteq B \subseteq X$ and $A$ is in the filter, then so is $B$. A filter is proper if it does not contain the empty set, and an ultrafilter is a maximal proper filter. Given a filter on $X$ and a topological space $Y$, a limit of a function $f : X\to Y$ is a point $y\in Y$ such that every open set containing $y$ pulls back to an element of the filter. For example, letting (continued) – Theo Johnson-Freyd Aug 31 '10 at 3:31

- (continuation) $X = \mathbb N$ and picking the cofinite filter (a subset of $\mathbb N$ is in the filter if its complement is finite) returns the usual Cauchy definition of the limit of a sequence. If $Y$ is compact (and Hausdorff) and we pick an ultrafilter on $X$, then every function has a unique limit (necessarily, if the ultrafilter contains the cofinite filter, to an accumulation point). On the other hand, my adviser for the project gave an example of a sequence in $[0,1]$ and an ultrafilter on $\mathbb N$ so that no convergent (in the cofinite filter) subsequence pulled (continued) – Theo Johnson-Freyd Aug 31 '10 at 3:37

- (continuation) back to an element of the ultrafilter. The reason I bring this up is that I think it illustrates some big differences between sequential and limit-point compactness. The reason I leave it as an extended comment is that it really is not an answer to your question. (The correct answer to your question being the one Pietro Majer has already given, namely that Counterexamples in topology is an excellent book, and almost certainly contains what you're looking for.)
–  Theo Johnson-Freyd Aug 31 '10 at 3:39
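Not from the thread itself, but for orientation: a few of the standard implications among these notions can be sketched as a diagram. Each arrow is a textbook result; this is written from memory, so double-check against a reference such as Steen–Seebach.

```latex
% Implications holding with no separation assumptions:
\[
\begin{array}{ccccc}
\text{compact} & \Rightarrow & \text{paracompact} \;\Rightarrow\; \text{mesocompact}
  \;\Rightarrow\; \text{metacompact} \;\Rightarrow\; \text{orthocompact} & & \\
\Downarrow & & & & \\
\text{countably compact} & \Rightarrow & \text{limit point compact} & & \\
\Downarrow & & & & \\
\text{pseudocompact} & & & &
\end{array}
\qquad
\text{sequentially compact} \;\Rightarrow\; \text{countably compact}.
\]
```

In metric spaces, compact, sequentially compact, limit point compact, countably compact and pseudocompact all coincide, while $\sigma$-compactness is genuinely weaker there (e.g. $\mathbb{R}$ is $\sigma$-compact but not compact).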
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition » New member, Båtsmans introduction and getting glassware. Author: Subject: New member, Båtsmans introduction and getting glassware. batsman Harmless Posts: 36 Registered: 4-7-2013 Member Is Offline Mood: No Mood New member, Båtsmans introduction and getting glassware. Hey, guys. My first post here. I have been lurking around for a while, but now was the time. I guess i have to introduce myself then. I am a happy family guy from northern Europe. I have always been a science freak, but a while back i was given Dr. Shulgin's legendary book "Pihkal". I started reading it, but it was some hard stuff to read, especially when you can´t understand half of what's written. So i started studying organic chemistry online; i bought books and borrowed some from the library. So when i have the time i take a break, sit myself down and try to get some reading done. And time is not the easiest thing to get when you have small children and a nagging wife. So what i am planning on doing is to get myself some distillation lab glassware and start experimenting. I have about 400-500 dollars to spend. I am thinking of splitting the budget up: glassware for about 200-300 $, heating mantle or hotplate/stirrer for about 100 $. I haven´t found a heater i feel that i can use. I found one that was a heating mantle and stirrer combined, but most of the ones i have seen are made for tiny round bottom flasks (100ml). I feel like i really would need a heater with a magnetic stirrer built in.
I guess that you guys probably would advise me to get one that's combined. I won't do any hard stuff at first of course. I was thinking of starting with stuff like extracting caffeine from coffee and recrystallization. I think it would be great to learn by doing it, and nice to get some knowledge about chemicals too. So, as i just wrote; i am going to get some glassware soon. I just have to be really sure of what i am getting, that the guy who sells the stuff is an honest guy, and that the glassware is of decent quality, at least. I have found a bunch of ppl and companies that sell laboratory and organic chemistry kits, and i have chosen two sellers that i feel have been getting good feedback for years, and that i feel consider this as their business pretty seriously. My top 2 are both selling organic chemistry kits and distillation kits, and that's what i am thinking about. One of them has lower prices than the other, and that's great for me, but that might be because the quality is lesser; the other seems like he's got more of a quality brand glassware. And also another weird thing is that all of the adapters, flasks and glass joints are 24/29, not the "standard" 24/40. Can i buy an adapter that converts the 24/29 to 24/40? C u guys in a while. Oopsy_daisy Harmless Posts: 15 Registered: 24-6-2013 Location: Sweden Member Is Offline Mood: No Mood There are adapters from 24/29 to 24/40 and vice versa but they are surprisingly expensive at around 10 $ a piece. A simple Google search of "24/29 to 24/40 adapter" will give you some results. sonogashira International Hazard Posts: 555 Registered: 10-9-2006 Member Is Offline Mood: No Mood You don't need an adapter. 24 is the width, 29 is the length. You can use any 24 width joint interchangeably.
The only difference will be that the 40 length joint will protrude into the flask a little, but it will make no practical difference. I don't think that you would be able to buy an adaptor even if you wanted to. If you could change your mind about a 100ml flask being 'tiny' you will save yourself a huge amount of money on chemicals and glassware in the long-term. You can easily make gram amounts in a 100ml flask. If you use a 250ml flask and have 1 litre of solvent you can do only 4 experiments; if you have a 25ml flask you can do 40 experiments! I use B10 5ml flasks! Maybe B10 is too small for a beginner, but I would recommend that you get some B14 or B19 glassware too, so that you can optimize your method without wasting too much material. Scaling up from 50ml scale to 500ml scale is better than wasting ten times the amount of chemicals on a failed reaction. If you use TLC for optimization experiments then only microgram amounts are needed - in which case a 5ml flask is extravagance itself! [Edited on 5-7-2013 by sonogashira] Magpie lab constructor Posts: 5939 Registered: 1-11-2003 Location: USA Member Is Offline Mood: Chemistry: the subtle science. Quote: Originally posted by batsman I feel like i really would need a heater with a magnetic stirrer in. I guess that you guys probably would advice me to get one thats combined. Hello Batsmans and welcome to the community. Sometimes stirring is needed/helpful when you are heating but it is not always needed. If the reactants are boiling away on reflux then it is self-stirring. I have a Corning mag-stirrer-hotplate that I can place under any of my Glas-Col (birds nest type*) heating mantles. These mantles have no stirring capability themselves. This works out well. 
*my brother refers to these as breast warmers [Edited on 5-7-2013 by Magpie] The single most important condition for a successful synthesis is good mixing - Nicodem sargent1015 National Hazard Posts: 315 Registered: 30-4-2012 Location: WI Member Is Offline Mood: Relaxed Quote: Originally posted by batsman just that the guy who sells the stuff is a honest guy, and the glassware is of decent quality, at least. Dr. Bob, we have another customer for you! If you are willing to piece together your kit, he has some great items at even better prices! The Home Chemist Book web page and PDF. Help if you want to make Home Chemist history! http://www.bromicacid.com/bookprogress.htm adamsium Hazard to Others Posts: 180 Registered: 9-4-2012 Location: \ƚooɿ\ Member Is Offline Mood: uprooting Quote: Originally posted by sonogashira I use B10 5ml flasks! Where did you find a good range of B10 glassware? I've had a bit of a look but only found a few things, not really enough for a full set. I actually have quite a lot of 14/20 glassware on its way from Laboy at present, with flask sizes from 5 mL to 100 mL (and I'd consider 100 mL to be pretty huge for most applications, certainly not tiny). I already have a 24/40 set, but it's just too big; small / semi-micro scale is much better suited to most amateur work (and even a lot of work just in general). Aside from the obvious cost benefit, it's also safer and disposal (when necessary) is less hassle. sonogashira International Hazard Posts: 555 Registered: 10-9-2006 Member Is Offline Mood: No Mood I got it all from ebay. Occasionally you will find an old boxed set of Quickfit glassware - a chemistry set from the 1950/60's - but they are rare, and usually attract many bids. Most of the time I ask people who are selling a few pieces of B10 glassware whether they have any more. If they do they will tend to sell it quite cheaply, and quite often they will have more (usually pieces from an old chemistry set). 
There is a good selection of parts in B10 glassware, and it is a nice scale to work with, and definitely suitable for experimentation with a new reaction. I've taken a picture of a few of the parts that I could find. This is on an A4 piece of paper, for scale. There are round bottom flasks (up to 100ml) missing, and air condenser, drying tubes etc, but this is most of the pieces, I think. I find that 100ml flasks are slightly too large for B10. Given the working area of the condenser, I will usually put an air condenser in-line if I work on 100ml scale, but I tend to scale up directly from B10 25ml to B19 250ml flasks for preparative work. For comparison, there is a microlitre flask in the picture, to the right. This is 300 ul capacity, with a long air condenser (open-ended melting-point tube) fitted using teflon. Using these it's very quick to optimize a reaction. Using microlitre syringes to transfer solutions of solid reagents, at known concentrations, one can very easily vary the quantity of a reagent as 1 equiv., 2 equiv....10 equiv., into different flasks (which are very cheap), then simply spot the reaction mixtures next to each other on a TLC plate. It is quickly evident where the optimum yield is obtained. One can go further and optimize between 4.1 equiv., 4.2 equiv., 4.3 equiv. etc. using fresh flasks in a very short amount of time, using only millilitres and milligrams of reagent and solvent. I put them directly on a sand bath if heating is needed, and the air condenser, due to its length, works very well for most solvents. B14 is a nice scale too (except that the 5ml flasks look ugly!), and probably more versatile if you want to use adapters to different joint sizes. Attachment: Picture.zip (1.2MB) This file has been downloaded 290 times [Edited on 6-7-2013 by sonogashira] Finnnicus National Hazard Posts: 342 Registered: 22-3-2013 Member Is Offline Why did you post this twice? Yes, nice to meet you too. -(Not sure if I should report this).
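sonogashira's earlier solvent arithmetic (1 litre of solvent gives 4 runs in a 250ml flask but 40 runs in a 25ml flask) can be written as a toy calculation. The function name and the one-flask-fill-per-run model are just illustrative assumptions, not anything from the thread:

```python
def experiments_per_litre(flask_ml: float, solvent_ml: float = 1000.0) -> int:
    """Rough count of runs a solvent stock allows, assuming each run
    uses one flask-volume of solvent (the simple model in the post)."""
    return int(solvent_ml // flask_ml)

# The numbers quoted in the thread:
print(experiments_per_litre(250))  # 4 runs from 1 L
print(experiments_per_litre(25))   # 40 runs from 1 L
```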
bfesser Resident Wikipedian Threads Merged 6-7-2013 at 05:04 adamsium Hazard to Others Posts: 180 Registered: 9-4-2012 Location: \ƚooɿ\ Member Is Offline Mood: uprooting That's really neat, sonogashira! The microlitre flask is really cool and I've never seen one like that. Is it a custom piece, or can you actually buy these? (I have tried some searching, but I'm not even sure what to call it) I'm interested to see what the really tiny 14/20 flasks will look like, actually. The smallest 24/40 one I have is 25 mL and that looks kind of silly. That's also a really tiny volumetric flask. The smallest I have is 5 mL, but that looks like it might even be a bit smaller than mine. I've worked with a Radley's Carousel type setup for optimisation, and it was fantastic - very small scale (using the tubes) and a really clever system with the integrated 'condenser' and argon/vacuum ports, etc. What is the item that is almost in the top left corner and has the long, narrow glass tube protruding from its base and two hose barbs coming off the top part, almost at right-angles? batsman Harmless Posts: 36 Registered: 4-7-2013 Member Is Offline Mood: No Mood sonogashira: You changed my mind actually. It makes sense. In that case maybe i should get a 100ml or 250ml heating mantle with stirrer instead? If i get a 500ml or a 250ml mantle, can i use it to heat a 100ml flask? I can´t see why not, but i can´t be sure. On ebay, who are you getting your gw from? I have found 4-5 different sellers that (i think) have a good selection, but there is probably someone out there that has a better deal, and with a better selection. Magpie: Thanx. What do you mean by self-stirring? Would you say that it would be ok to get a mantle with just heating, and use that just until i have the money to get a stirrer?
The thing is that i have this damn budget; i only got about 100 $ to spend on a heater/stirrer, but sonogashira opened my eyes to using the smaller flasks, and in that case i maybe could get a mantle with both heating and stirring for under 100. And yes, breast warmers. Isn´t that what it is? sargent1015: I didn´t get that. Who is Dr Bob? adamsium: What's considered a full setup? I am getting a lot of "aha´s" from you guys today. Maybe i should get a 14/20 set instead then. That´s good news for me and my budget. I am new to the glassware so i barely know anything, but i'm learning. And then maybe i should skip the larger ml flasks for now? This is my first setup, and i won´t do any large scale reactions with it. Thanx mate. Finnicus: I don´t know what happened. I didn´t mean to post it twice. Ps. What is the big difference between 14/20 and 24/40, besides the obvious difference in size? [Edited on 6-7-2013 by batsman] adamsium Hazard to Others Posts: 180 Registered: 9-4-2012 Location: \ƚooɿ\ Member Is Offline Mood: uprooting A 'full setup' would very much depend on what you want to do. For synthetic organic chemistry, I'd say the minimum I'd consider a full setup would be some flasks (obviously), at least a liebig or similar condenser, distillation head and a take-off. There is, of course, a lot more that you could get, but much of it would be of little use when just starting out. One other item that would be desirable, though, is a separatory funnel. Dr Bob is a member of this forum and has a lot of nice glassware that he is selling at great prices and many members have bought from him and are very happy. I believe his stocks are dwindling on many items, but it would be worth seeing what he has available. Many people (myself included) have purchased glassware from Laboy glassware. They are based in China, but their glassware is of good quality and the prices are fantastic. The difference is really just the size. This is important, though.
Consider this - you have a 10 gallon drum, and you want to do a very small scale synthesis in it. You place your reagents in this 'reaction vessel' - the total volume of the reagents/solvent is 5 mL. You then tip the drum upside down to pour out your product. How much do you expect to get out? Almost certainly none - your losses will likely (almost certainly) be 100% in this case - it will be lost on the walls of the vessel and even by evaporation due to the very large surface area. This is obviously an extreme example to illustrate the point. plante1999 International Hazard Posts: 1937 Registered: 27-12-2010 Member Is Offline Mood: Mad as a hatter I have 14/20 5ml flasks, and find them quite large in fact, easily worked with. However, 0.3ml reaction vials from microscale kit are small, you add a drop of water and you see the level rise of easily 4-5 mm. I never asked for this. Magpie lab constructor Posts: 5939 Registered: 1-11-2003 Location: USA Member Is Offline Mood: Chemistry: the subtle science. Quote: Originally posted by batsman Magpie: Thanx. What do you mean by self-stirring? Would you say that it would be ok to get a mantle with just heating, and use the just until i have the money to get a stirrer? When the reactants are boiling there is a lot of turbulence. So it really doesn't need a mechanical stirrer. You can do a lot of chemistry without a stirrer. But at times they are essential. Quote: Originally posted by batsman Ps. What is the big difference between 14/20 and 24/40, besides the obvious differ in size? These numbers stand for the diameter/length of the joint in mm. 14/20 is used for small scale work like 1 to 10 grams, 19/22 is used for a medium size work like 10 to 200g, and 24/40 is large scale like 100 to 500g. Personally, I like to work in the medium range so often use my 19/22 glassware. When making reagents I like 24/40. But the arguments of those advocating 14/20 for economy, waste minimization, etc, are good. 
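Magpie's rule of thumb above (14/20 for roughly 1 to 10 g, 19/22 for 10 to 200 g, 24/40 for 100 to 500 g) can be written down as a quick lookup. This is just an illustrative sketch of that heuristic; the function and dictionary names are made up, and the overlapping ranges are from the post, not a standard:

```python
# Magpie's rough scale guide: joint size -> typical product scale in grams.
# Ranges overlap on purpose; treat this as a heuristic, not a spec.
SCALE_GUIDE = {
    "14/20": (1, 10),     # small scale
    "19/22": (10, 200),   # medium scale
    "24/40": (100, 500),  # large scale
}

def suggest_joint(target_g: float) -> str:
    """Return the smallest joint size whose typical range covers target_g."""
    for joint, (lo, hi) in SCALE_GUIDE.items():  # insertion order: small -> large
        if lo <= target_g <= hi:
            return joint
    return "24/40"  # fall back to the largest for out-of-range targets

print(suggest_joint(5))    # 14/20
print(suggest_joint(50))   # 19/22
print(suggest_joint(300))  # 24/40
```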
It's like buying a boat - whatever you get is a compromise. There is no "one size fits all." [Edited on 6-7-2013 by Magpie] The single most important condition for a successful synthesis is good mixing - Nicodem sonogashira International Hazard Posts: 555 Registered: 10-9-2006 Member Is Offline Mood: No Mood Quote: Originally posted by adamsium That's really neat, sonogashira! The microlitre flask is really cool and I've never seen one like that. Is it a custom piece, or can you actually buy these? (I have tried some searching, but I'm not even sure what to call it) I'm interested to see what the really tiny 14/20 flasks will look like, actually. The smallest 24/40 one I have is 25 mL and that looks kind of silly. That's also a really tiny volumetric flask. The smallest I have is 5 mL, but that looks like it might even be a bit smaller than mine. I've worked with a Radley's Carousel type setup for optimisation, and it was fantastic - very small scale (using the tubes) and a really clever system with the integrated 'condenser' and argon/vacuum ports, etc. What is the item that is almost in the top left corner and has the long, narrow glass tube protruding from its base and two hose barbs coming off the top part, almost at right-angles? I got mine from here (item G9): http://www.ebay.com/itm/Blown-Glass-Bottle-Sticky-Cap-Stoppe... I had a sample of similar flasks and I found that this design was by far the best. The round-bottomed flask is difficult to remove liquid from, but the pear-shaped design allows for all of the liquid to be removed. One can also do simple 2-phase extractions, and extract the lower layer etc. with this design. I bought 200 and they are very suitable for my purpose, and certainly cheaper than buying micro-scale sets which are stupidly priced. The use of a 1ml syringe, with a needle, is ok for adding solvent, but I use microlitre syringes measuring reagents. 
By placing them into flattened sand on an aluminium plate, one can easily heat them safely. Sometimes I heat them to near boiling point then remove them from the heat and push the stoppers in, wrapped with a little teflon tape. The Radley Carousels are good. I've seen some stupidly expensive automated optimization units coupled to HPLC-MS etc. I'm more of a fan of TLC - shorter reaction time, smaller sample size, better resolution! The item you mention is an overhead-stirrer. It's powered by water pressure, compressed gas, or a vacuum (same thing). One puts a stopper in the top, then allows a gas to pass in one end and out the other. I use butane from a cylinder. It works very well, and needs only a very minimal gas-flow. (A 5ml flask looks lost under a B14 condenser. It's a collecting flask). batsman - If I were you, I would get the best B14 set that I could afford. [Edited on 6-7-2013 by sonogashira] batsman Harmless Posts: 36 Registered: 4-7-2013 Member Is Offline Mood: No Mood Hey, it's me again. I just wanted to ask one thing. I have been checking out ebay to see if i could find any 14/20 or 19/22 glassware, and i found a couple of sellers, but only one company that sold 14/20 and 19/22 kits. http://unitedglasstech.com/2009_catalog.htm But i don´t necessarily need to buy a kit. I can put it together myself, but i am thinking of the price. And then it's going to cost me to ship also. I wouldn´t be surprised if that cost 100 bucks. Now i have all the money transfers done and i have 411 bucks. But anyway, i have been mailing with this guy that sells very affordable glassware (don´t know about the quality), but he's got kits that are 24/29. Has anyone ever heard about that before? Hexavalent International Hazard Posts: 1564 Registered: 29-12-2011 Location: Wales, UK Member Is Offline Mood: Pericyclic 24/29 seems to be more common in Europe than in the US, but I frequently use glassware of this size with no trouble. "Success is going from failure to failure without loss of enthusiasm."
Winston Churchill adamsium Hazard to Others Posts: 180 Registered: 9-4-2012 Location: \ƚooɿ\ Member Is Offline Mood: uprooting I will have to look into getting some of those tiny flasks at some point - what a great idea! I agree that TLC is incredibly useful for these things - it's cheap and quick and gives you a really good idea of what is (or isn't!) going on. I've been planning to get some TLC plates, actually, but I was amazed to find out how expensive they are! I'll get some at some point. I have seen some stuff about automated optimisation (not specifically for the Radley's systems) and it is really cool, but I can imagine it's absurdly expensive. I'm sure it has its applications, but I think TLC is great. That overhead stirrer is ... just. so. cool. I'm assuming it's designed for the long 3-necked pear shaped flask immediately to its right. I've never seen or heard of anything like it. That ebay store also has small vials. I might order some from them and see what they're like. I'm also wondering if they can get amber ones; I'll shoot them a message to find out. I also agree with Magpie - there is no 'one size fits all' here. I personally like very small scale for the reasons I mentioned already, but I also just find very small scale, delicate work to be elegant in its own way. Also, as I said, I do have the larger set (24/40) if I need it for larger scale preparative work. batsman - I'd really suggest checking out Laboy. Their prices are much lower than United Glass Tech's and the quality is actually very good. Compare http://www.laboyglass.com/lab-kit and http://unitedglasstech.com/lab__kits.htm . You'll notice that UGT's price for a 14/20 kit is $300, while Laboy's is $135. Having said that, if you opt for a small size like 14/20, I'd suggest finding out if you can get smaller flasks since the standard ones for 14/20 kits are the same sizes as the ones for the larger-jointed sets and this would basically defeat the purpose of getting a small scale setup.
Alternatively, just buy the items separately. Remember, too, that it really depends on what you want to do. If you really want to be performing syntheses resulting in large-ish amounts of product, 14/20 might be smaller than you really want. If you're more interested in just performing syntheses and other experiments for the experience, then you're probably better off with a small setup as it really makes no difference whether you end up with 1 g or 200 g of product in that case and you'll be able to do many more experiments with the same amount of reagents. You also mentioned costly shipping. Again, Laboy wins hands down here. Their shipping is a flat rate of $15 worldwide and it's even free for large orders (>$500, so not applicable here, but it's a good deal, regardless). I checked UGT's rate for one kit and it came to around $40. sonogashira International Hazard Posts: 555 Registered: 10-9-2006 Member Is Offline Mood: No Mood One can sometimes get brown 2ml HPLC vials quite cheaply. In fact I've been looking into making an automated synthesizer/optimizer using a gas chromatography injection unit/sample handler. All of the actions necessary for adding reagents and solvents to a vial, washing the needle, extracting small samples for testing etc. are there already. It would take only a little effort to amend it to automated synthesis. Even the computer software is provided; one simply has to change the pre-set parameters. If you're interested in TLC I'd recommend the old book by Egon Stahl (who popularized the technique) called Thin Layer Chromatography, if you can find it cheaply. It's remarkable how universally and accurately it can be used, even for quantitative analysis and identification of unknown samples. I was looking at the user manual for one of the old 1960's B10 sets that I mentioned. The diagrams are actually very useful for all the different set-ups that can be arranged with the basic set of (any-sized) glassware. 
I'm not sure if modern glassware kits come with user manuals of their own, but I thought it was good information and diagrams, which perhaps you will find useful batsman, as regards the different types of reaction that you may perform with interchangeable parts. Attachment: p4.zip (1.3MB) This file has been downloaded 264 times Attachment: p5.zip (1.3MB) This file has been downloaded 298 times Attachment: p6.zip (1.7MB) This file has been downloaded 270 times Attachment: p7.zip (1.8MB) This file has been downloaded 238 times Attachment: p8.zip (1.6MB) This file has been downloaded 286 times batsman Harmless Posts: 36 Registered: 4-7-2013 Member Is Offline Mood: No Mood Hey guys. What´s up? I have to say that i have learned so much just by reading your posts in this thread. I really appreciate that you are taking your time trying to help me out. About the smaller ml flasks, that was genius to me. It really gave me one of those "aha, so that´s how it works"-moments, you know? I always thought that you use the big flasks for small reactions too. Anyway, i probably won´t do large reactions that require me to use a 1000ml flask for a long time. I checked Laboy out and i like their glassware very much. And it seems reliable to get it direct from the company too. Thanx. Also, after i thought about what joint sizes would suit me best, i decided to go with 19/22. 14/20 seemed a little bit too "fragile" to me, and the 24/40 was maybe too much for me right now when i am just starting up experimenting. About a "full setup", i am right now aiming just to get a fully functional distillation setup with which i can do simple and vacuum distillations. Maybe even fractional, if i have the money. And next time i shop for gw, i get the reflux gear i need, and then the time after that, oil distillation, and so forth. I would like you guys to take a look at what i am going to get at Laboy, and give your opinion, feedback, or whatever it can be.
Laboy has so many different pieces that i haven't even heard of before. All of them are 19/22.
3-way Claisen adapter.
Distillation adapter 3-way.
Vacuum take-off adapter, long stem. - (short or long stem?) (i was thinking of those "flow control" vacuum adapters, but do they work together with an aspirator?)
Distilling adapter.
Thermometer adapter 3-way.
Inlet adapter - (?)
DISTILLING COLUMN CONDENSER VIGREUX.
Liebig or West condenser, with removable hose connection. - Which one, or both?
Now we come to the single neck flasks. This i don´t know much about, so a little bit of help would be great. It seems like i always hear about RB, but nothing about the rest of them. They have RB, FB, Erlenmeyer, evaporation, pear shape and recovery and RB with 35/20 spherical joint. And about the 2-neck flasks, should i get 19/22 and 19/22, or what? The same thing with the 3-neck? I know there are many questions. I have many many more, so this is nothing. Love u guys for helping me out. adamsium Hazard to Others Posts: 180 Registered: 9-4-2012 Location: \ƚooɿ\ Member Is Offline Mood: uprooting 19/22 is probably a good 'best of both worlds' size. Also, if you're really new to chemistry, 14/20 might even be a bit small and feel a bit 'clumsy' when you're not used to this sort of equipment in any size, let alone really tiny stuff. 3-way Claisen adapter - This is essentially for using a single necked flask as a multi-necked flask. If you want to be able to, say, reflux while also adding reactants via a dropping funnel at the same time, then a claisen adapter is good. Otherwise, you might not need this. Distillation adapter 3-way - You will definitely want some sort of 3-way adapter. This one doesn't have a thermometer joint, and I actually couldn't find the thermometer compression fitting type adapter for sale individually on the Laboy site when I looked, although it is included in their kits.
Perhaps either ask them if it is available separately, or, it may end up being more economical to just buy the setup as the kit, anyway. Vacuum take-off adapter, long stem - (short or long stem?) (i was thinking of those "flow control" vacuum adapters, but do they work together with a aspirator?) The short stem ones, I think, are more versatile, as you can use them with a very small receiving flask, but the long stem may be too long for some flasks (to be able to connect the joint, anyway). The flow control adapters are a very different thing. Distilling adapter - If you are getting the vacuum take-off adapter, you shouldn't need this, unless there's some particular reason that you do. Note that the vacuum take-off adapter can (and almost always should be) used even without vacuum. You do NOT want to be heating a closed system, so the vacuum port will serve as a pressure equalisation port, preventing explosion or leakage of the apparatus due to pressure build-up from heating. Thermometer adapter 3-way - If you're able to get the separate thermometer compression fit adapter, you wouldn't need this. There is also another one that they have, but only appears under the 'distillation' section, which has a ground glass thermometer joint, but this would require a jointed thermometer and you'd probably want a small stopper to stopper the hole if you didn't want to use a thermometer with it and didn't have the 'normal' 3-way adapter from above. Inlet adapter - Do you need this? This would generally be to apply vacuum or an inert gas such as argon to the apparatus. It's unlikely that you will have any use for this at this point. For a vacuum distillation, you'd simply use the vacuum port on the take-off adapter. Vigreux column - Obviously good if you want to do fractional distillations. Liebig or West condenser, with removable hose connection - I actually have wondered myself what others think of the merits of each. 
The West condenser is supposedly a 'more efficient' version of the Liebig condenser with a smaller jacket and vapour/condensate path, presumably decreasing the heating of the cooling fluid and increasing cooling of the vapour. Many of the standard kits I see seem to include both, but I doubt that you really need both. As for multi-necked flasks, if you're only planning on doing distillations for now, you really won't have any use for these. If you do want to be able to perform some reactions where multiple necks are required (for concurrent reflux and addition, for example), you can achieve this with a claisen adapter, as discussed above. As for joint sizes, it really depends on what you want to put in them (the joints). They mostly seem to have all joints on a given flask of the same size from Laboy, anyway, aside from the 2-neck RBFs. Recovery flasks are generally for use with rotary evaporators. Pear shaped flasks are good for distilling from and sometimes as reaction flasks. I wouldn't bother with spherical joints. Looking at this, I think it would be well worth considering the kit. It also includes a separatory funnel, which is indispensable for many procedures and can also double as a dropping funnel if needed. You can always add a few more items or just get them later. [edit:] I forgot to mention Dr Bob again. I hadn't mentioned him because I'm pretty sure I remember him saying that he didn't have much left in 14/20 and that's what we'd been talking about. It might be worth asking him. Though, with shipping and such, Laboy may prove the cheaper option (especially if you're not in the USA). [Edited on 8-7-2013 by adamsium] batsman Harmless Posts: 36 Registered: 4-7-2013 Member Is Offline Mood: No Mood Hey, adam. Thanx for responding and putting all this together just to answer my newbie questions. I really appreciate it. Yeah, i have thought that a kit would be better now for my first order, and as you said, i can always add stuff later.
I got a mail this morning from sgv_. They said that they had a really great deal on a kit that is 24/40. I asked a few days ago if they had 19/22 kits also; they didn't, but he said that I can just react smaller volumes in smaller flasks. Either way, I am almost 98% sure that I will go with 19/22. It all comes down to money. These are the kits I have been thinking about getting, and they come in a case too:

http://www.ebay.ca/itm/NEW-Advanced-Organic-Chemistry-Lab-Gl...
http://www.ebay.ca/itm/Advanced-Organic-Chemistry-Lab-Glassw...

I have about 400$, and I had been counting on buying a heater/stirrer out of that 400$ as well. But I know that after I get my stuff I won't be able to start right up, because I don't have any chems yet. So I thought I would put as much money as I can into glassware this time, and later, in a couple of weeks when I order the chems, I can get a good mag stirrer/heater. Sometimes I get too eager. I'm rambling again. Whatever. Thanks, adam. You have been of much assistance to me, my friend.

Magpie (lab constructor, Posts: 5939, Registered: 1-11-2003, Location: USA):
Batsman, take a look at the 19/22 kit at this UGT site: http://unitedglasstech.com/lab__kits.htm

This is a typical kit for use in the teaching labs for 1st year university organic chemistry in the US. UGT makes or sells good glass. In my experience it is heavy walled. In some cases when I ordered from them they sent me better glass, like Wilmad. If the price is too high, look for a similar kit on eBay, new or used. I think they come up fairly regularly. Another, perhaps better, option would be to assemble your own kit by buying the individual items from Dr Bob. He has top quality glassware.

This kit, some beakers, and a heat source is most of what you need to be off and running in organic chemistry. If you would like to start out in inorganic chemistry to get some experience, you won't need that kit.
Some beakers, test tubes, Bunsen burner, funnel, graduated cylinders, stirring rods, crystallizing dish, etc will do. Also of utmost importance is a good, secure place to experiment and store your chemicals. If you don't have a fume hood, many of your experiments may need to be conducted outside.

Jumping into chemistry without formal training and experience can be a little tough, I imagine. But with your enthusiasm and willingness to study you can go a long way. Safety is another concern for someone with no training. You'll have to make up for that by using a lot of common sense.

[Edited on 8-7-2013 by Magpie]

The single most important condition for a successful synthesis is good mixing - Nicodem

batsman:
Check this out: http://www.laboyglass.com/lab-kit/chemistry-distilling-set-2...

Including:
1. 500 ml round bottom flask
2. 1000 ml round bottom flask
3. Thermometer three-way adapter
4. Vacuum take-off adapter
5. 200 mm distilling condenser
6. 0-300 °C range thermometer
7. 4-pcs plastic joint

All the joints above are 24/40. The price is great; I actually think I'm going to order it tonight.

I want to do some reactions sometime in the future, so I'm getting single and multi-neck flasks for reactions, in the sizes 25, 50, 100, 250 ml. Would you maybe advise me to get another type of flask that's good to have? Flat bottom or pear shaped flasks or something?

If I were to get some basic fractional distillation and reflux parts, which ones would they be? Vigreux column for fractional. For reflux, I guess that a reflux condenser, like this http://www.laboyglass.com/reflux-condenser-large-cooling-cap... would be a start? Anything else, or do fractional distillation and reflux require just the pieces I wrote?

Found this site, so now I don't have to bother you anymore. I think I'm getting ahead of myself; I will try to get the vacuum distillation setup going first. Thanks, mate.
batsman:
I'm back! Just kidding, I forgot to ask: the Soxhlet extractor stands alone, right? It's not connected to the distilling apparatus? I can choose between 34/45, 45/50 and 55/50.

Magpie:
I don't have a "reflux" condenser per se - my regular West condenser that came with my kit works just fine for reflux.

The second column you see in the UGT kit is a Liebig condenser. This should have been a Hempel column, as it is in my kit, IMO. The Hempel is used as the fractional distillation column, and has 3 indents to support the packing. It is used in combination with the West condenser for fractional distillation.

The fractional distillation column requires a packing like broken glass or a stainless steel scrub pad. You can make the broken glass by smashing a used food jar. The scrub pad can be obtained very cheaply at your local hardware store. This column will be better than a Vigreux column for fractional distillation.

The Laboy prices are quite low. I cannot comment on the quality of Laboy glassware as I have none, but I think several other members do own it.

Edit2: This Laboy kit looks like it has the right equipment, IMO. Looking closely (by mouse) at the "distillation" column, I do see what looks like an indent for the packing support. I don't know if they have this in 19/22 also. To be fair to UGT, that's probably what they have for a distillation column in their kit too; they just call it a Liebig condenser, as it indeed can be used as a condenser. http://www.ebay.com/itm/New-Organic-chemistry-lab-glassware-...

But, again, I don't know anything about Laboy's quality.
[Edited on 8-7-2013 by Magpie]
Wednesday, December 31, 2008

CS: Thank you!

While 2008 is about to shut down, I wanted to say thank you for your help. For the past two years, an increasing number of people have contributed to this blog. Direct contributors have included: and enriched it wonderfully. Some of them even created their own research blogs; this is outstanding! Since I am slow, my understanding of some of the CS papers or some subject areas could not have occurred were it not for a kind person answering my questions. I got lucky often, and most of the time an e-mail conversation ensued that I eventually felt was important for everybody else to read. The following people kindly ok'ed turning our offline discussions into a full post or some part of one: Thank you! As a matter of general policy, if you are having a detailed technical e-mail discussion with me, I generally ask if it's OK to publish it afterwards. I usually go through some proof-reading and remove non-technical items. Eventually the final draft is OK'ed by you.

One of the annoying things that occurs when I go to conferences is that I also bring a bunch of folks who will relentlessly ask questions. Thank you to Ron DeVore, Jean-Luc Guermond and Bojan Popov for allowing us to disrupt their smooth meeting at Texas A&M. The same goes to Frank Nielsen for ECTV '08.

Much of the referral traffic to this blog was not self-generated! Terry Tao's, Andrew Gelman's and John Langford's blogs provide much of that traffic. Thank you guys.

While I see less of this nowadays, I still feel strongly that, as the author of your own paper, you should go through the pain of building a website, which can take about 5 minutes using Google Sites these days. If you go through the effort of giving additional online information or adding your code, I generally try to make sure you have the maximum exposure as a result. I always prefer reproducible research results to hard-to-reproduce papers.
Please also consider that even if an algorithm looks easy for you to implement, it may not be the case for most. In particular, I intend on remaining slow and dumb for the continuing year (no good resolution for 2009 on that front). I also say it is a pain for you to have a greater web presence because most of you are currently not being judged by your peers on these "details". You eventually will be! If not by your peers, then by the engineers in companies that want to do business with you, either as a future employee or a consultant. There is no better time to start than today.

Recently, I had a discussion with one of you on the non-availability of the preprints on his site. The e-mail discussion that ensued revealed that the lab policy had not been updated in a long time. The folks at Sherpa provide you with a database for checking on your ability to perform self-archiving under specific publishers' copyright policies. Please use this tool as opposed to relying on outdated policies or information.

As an aside, if you believe this blog provides a valuable utility, consider donating to the webcrawler engine that makes it all possible at watchthatpage. If you are using this service for other purposes, please consider donating money to them as well. The paypal system accepts credit cards, and additional information can be added in a message form where you might want to let them know about their huge contribution to the Nuit Blanche blog. The kids implementing this tool told me that it is a typical start-up with two guys working in a dorm. Again, this blog and the community of readers would not be what it is without them, so please spare a buck, or twenty, or a hundred; some of you can think of it as just a business expense. I do not own stock in that venture.

Last but not least, the folks behind the codes and websites providing a resource to the CS community should be commended for doing an extraordinary job. They are listed in the Compressive Sensing 2.0 page.
In particular, a big kudos goes to Mark Davenport for attending to and regularly updating the Rice Compressive Sensing site. To the ones I forgot to list, please forgive me and do tell me. To all, thank you and Happy New Year. Enjoy watching the finale of Sydney's fireworks.

CS: A Python Alternating L_1 Implementation

Stephane Chretien just let me know that he and Alexis Flesch (a Ph.D. student working for Jean-Yves Dauxois) have produced a Python implementation of the Alternating L_1 method. The implementation uses cvxopt, a Python package for convex optimization. The graphs show a perfect recovery with the alternating L_1 algorithm and the same non-recovery for the L_1 algorithm.

Tuesday, December 30, 2008

CS: Compressive Sensing Background Subtraction Homework

Found on Youtube: this video illustrating an example of CS background subtraction, which I think is an implementation of Compressive sensing for background subtraction by Volkan Cevher, Aswin Sankaranarayanan, Marco Duarte, Dikpal Reddy, Richard Baraniuk, and Rama Chellappa. From the accompanying text of the video:

Compressive Sensing based background subtraction. This video contains four parts: top-left is the original sequence, bottom-left is the reconstructed (using l1-minimization) difference image, top-right is the silhouette (estimated from bottom-left by median based thresholding) and bottom-right is the foreground reconstruction.
----
This was part of a course project for Video Processing (Dr. Baoxin Li) in Fall 2008. Team: Raghavendra Bhat, Rita Chattopadhyay

This is funny, as it reminded me a little bit of the saliency maps in Laurent Itti's work. At the very least, it could be used as a first iteration of these saliency maps, as shown in this video.

Monday, December 29, 2008

CS: Terence Tao Video Presentation of Compressed Sensing

As mentioned earlier, Terry Tao made a presentation at NTNU on compressed sensing. The fine folks at NTNU made a movie of it and put it on Youtube.
Because of the length, it is cut into seven parts. I'll add it to the CS video section later. Enjoy. The same lecture is available in full here. However, I could not watch it using Chrome or Opera, and it looks like, once again (though in a less drastic way), the lecture can be viewed only with Internet Explorer and Firefox.

Friday, December 26, 2008

CS: Group Testing in Biology, Tiny Masks for Coded Aperture and Linear Reconstruction, Andrew Ng

Anna Gilbert, in her presentation entitled "Applications of Compressed Sensing to Biology", shows some of the most recent results obtained with colleagues (one is Raghu Kainkaryam, mentioned previously) performing group testing using compressed sensing ideas. The idea is to reduce the number of chips (SNP) for testing. The results are currently so-so. The video is here.

When watching Assaf Naor's video at the intractability center, I did not realize that there were that many negative results in the Johnson-Lindenstrauss approach in other spaces.

At the Advanced Light Source at Lawrence Berkeley National Laboratory, we are investigating how to increase both the speed and resolution of synchrotron infrared imaging. Synchrotron infrared beamlines have diffraction-limited spot sizes and high signal to noise; however, spectral images must be obtained one point at a time, and the spatial resolution is limited by the effects of diffraction. One technique to assist in speeding up spectral image acquisition is described here and uses compressive imaging algorithms. Compressive imaging can potentially attain resolutions higher than allowed by diffraction and/or can acquire spectral images without having to measure every spatial point individually, thus increasing the speed of such maps. Here we present and discuss initial tests of compressive imaging techniques performed with ALS Beamline 1.4.3's Nic-Plan infrared microscope, Beamline 1.4.4's Continuum XL IR microscope, and also with a stand-alone Nicolet Nexus 470 FTIR spectrometer.
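The difference at stake here — plain linear reconstruction versus a sparsity-exploiting (nonlinear) one — can be illustrated in a toy 1-D setting. This is a sketch with made-up dimensions, not the beamline's processing; it uses a Gaussian sensing matrix for simplicity and scipy's LP solver for basis pursuit:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# A k-sparse "scene" observed through m < n random linear measurements.
m, n, k = 24, 60, 3
A = rng.standard_normal((m, n))
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = [2.0, 1.0, 3.0]
y = A @ x

# Linear reconstruction: minimum-norm least squares (pseudoinverse).
# This ignores sparsity and smears energy over all coordinates.
x_lin = np.linalg.pinv(A) @ y

# Nonlinear reconstruction: basis pursuit, min ||x||_1 s.t. Ax = y,
# written as an LP via the standard split x = u - v with u, v >= 0.
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None))
x_l1 = res.x[:n] - res.x[n:]

print(np.linalg.norm(x_lin - x), np.linalg.norm(x_l1 - x))
```

With these dimensions the L1 solution recovers the sparse scene essentially exactly, while the pseudoinverse does not — which is the gap the post below is pointing at.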
They are devising coded apertures with micrometer masks. The reconstruction is a traditional linear transform. I am sure they will eventually look at a nonlinear reconstruction technique to get a better spatial resolution, as did Roummel Marcia and Rebecca Willett in Compressive Coded Aperture Superresolution Image Reconstruction (the slides are here). I am also told by Ori Katz, one of the folks at Weizmann performing Ghost Imaging (as mentioned here), that their reconstruction scheme is also linear. This is also the case for the coded aperture in the IR range that could be implemented in the LACOSTE program (as can be seen here and here). I think one of the main challenges of CS in the near future will be to characterize more straightforwardly how, in imaging (i.e. positive functions in 2D), nonlinear reconstruction techniques provide additional information over simpler and older linear-algebra-based reconstruction techniques.

Andrew Ng presents his work in this video. I have mentioned him several times before these past two years.

Monday, December 22, 2008

CS: Another Nuit Blanche: Geometry and Algorithms Workshop Videos

The Center for Computational Intractability has just released some of the videos of the Geometry and Algorithms workshop. Several talks are of interest to the compressive sensing community (labeled with *). If you watch all of them, it's gonna be a long night again:

Credit: NASA/ESA, SOHO

Sunday, December 21, 2008

CS: Watching Videos on The Longest Night.

Tonight is the longest night in the northern hemisphere. Let's watch some videos till the end of this nuit blanche....

First, Dror Baron has given some context to his compressed sensing research. The reason for this blog is to put some context around the amount of raw information that is accumulating on the subject. I always appreciate it when researchers go beyond their expected output and try to put their research in perspective. Thanks Dror!
At the Center for Computational Intractability, Adi Akavia made a presentation on the SFT algorithm that can "Deterministically and Locally Finding Significant Fourier Transform Coefficients". The video of the talk is here. Also of interest are the two talks by Sanjeev Arora and Moses Charikar entitled "Semidefinite programming and approximation algorithms: a survey" (video_part_1 and video_part_2) and the tutorial by Assaf Naor on "An Introduction to Embedding Theory" (video part 1, video part 2). The abstract of the tutorial is:

This talk is intended as an introductory talk for non-experts in preparation for the upcoming Princeton workshop on metric embeddings. This workshop will focus on new directions in this area, so the purpose of the tutorial is to explain some of the relevant background. We will explain the basic definitions, show some important examples, discuss some of the fundamental theorems (with a few complete proofs), show some lower bound techniques, and describe a few applications of this mathematical discipline to computer science. No prerequisites will be assumed.

Finally, a more hardware-focused talk by Ramesh Raskar at ECTV'08. If you haven't had time to read his papers yet, here is a good way to get interested: watch the video.

Thursday, December 18, 2008

CS: Ain't the Power of Interwebs Great or What ?

Five examples of why the interwebs is great: a reader provides a CS solution to the 12-balls problem, a commenter on a blog pinpoints an obvious fact, another reader shares his paper before it can be hosted on its home site, and the authors provide access to a free book and a recent presentation.

* Marc Bernot, a reader of this blog, who works in the Observation Systems Research Division of the Thales Alenia Space Company, wrote the following:

I was not completely surprised by the title of your last post (the one about the 12 balls problem) because I recently came up with a solution/explanation with a small taste of CS.
Maybe this will be of interest to you (or your readers if you think this is sufficiently understandable). Let us consider the following matrix:

M = [ 1  0  0 -1  0 -1  1 -1  0  1 -1  1;
      0  1  0 -1 -1  0 -1  0  1  1  1 -1;
      0  0  1  0 -1 -1  0  1 -1 -1  1  1 ]

The three rows correspond to 3 weighings and the 12 columns to the twelve balls. A 1 means putting the corresponding ball on the right plate of the balance, and a -1 on the left. For example, the first row means that balls number (1,7,10,12) are put on the right and (4,6,8,11) on the left. The output of these three weighings is enough to determine which ball is different and whether it is lighter or heavier than the rest. Here is why:

Let x be the vector of masses of 12 balls, all weighing m but one that weighs m'. We have x = m*(1 ... 1) + (m'-m)*u, where u is a vector with only one nonzero (with value 1) coordinate (it is 1-sparse). Because M has as many 1s as -1s on each row, we have:

Mx = (m'-m) Mu

What we have access to by weighing masses according to M is not exactly Mx, but the sign of Mx. For example, if ball 1 is heavier, the left plate is up during the first weighing and the sign of the first coordinate of Mx is +1. So, the outputs of the weighings are converted this way: left plate up = 1, left plate down = -1, even = 0. The output is thus sign(Mx) = sign[(m'-m)Mu]. Since u is of the form (0 .. 0 1 0 .. 0), sign(Mx) is one of the columns of M or the opposite of one of the columns of M. Because of the way M was constructed, there is one and only one column that corresponds to sign(Mx) (or its opposite). The number of this column is the number of the faulty ball. If sign(Mx) has the same sign as Mu, the ball is heavier; it is lighter otherwise.

About the particular construction of M: excluding (0,0,0)', (1,1,1)' and (-1,-1,-1)', there are 24 vectors in {-1,0,1}^3. If we assume that two vectors with opposite signs are equivalent, then there are only 12 classes left.
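Marc's scheme is easy to verify numerically. The sketch below copies the matrix and the sign convention from his description and exhaustively checks that every faulty-ball scenario (any of 12 balls, heavier or lighter) is identified by the three sign measurements:

```python
# Marc's weighing matrix: 3 weighings (rows) x 12 balls (columns).
M = [
    [1, 0, 0, -1,  0, -1,  1, -1,  0,  1, -1,  1],
    [0, 1, 0, -1, -1,  0, -1,  0,  1,  1,  1, -1],
    [0, 0, 1,  0, -1, -1,  0,  1, -1, -1,  1,  1],
]

def sign(v):
    return (v > 0) - (v < 0)

def weigh(x):
    """Signs of the three weighings for the mass vector x (i.e. sign(Mx))."""
    return [sign(sum(M[i][j] * x[j] for j in range(12))) for i in range(3)]

def decode(signs):
    """Match the sign pattern to a column of M (or its opposite)."""
    for j in range(12):
        col = [M[i][j] for i in range(3)]
        if signs == col:
            return j, "heavy"
        if signs == [-c for c in col]:
            return j, "light"
    raise ValueError("no matching column")

# Exhaustive check: every ball, heavier or lighter, is identified.
for ball in range(12):
    for delta in (1, -1):       # +1: heavier, -1: lighter
        x = [10.0] * 12
        x[ball] += delta
        assert decode(weigh(x)) == (ball, "heavy" if delta > 0 else "light")
```

The loop passes silently, confirming that the 12 columns (up to sign) are pairwise distinct and the decoding is unambiguous.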
M was constructed so that its 12 columns represent the 12 classes of equivalence (and the sum of the elements of each row is 0). Conclusion: we were able to identify any 1-sparse vector in {-1,0,1}^12 with 3 "sign measurements".

Kudos to Marc. If I am following the description of how he scales the matrix M, then the number of balls that could be checked with 20 weighings is 1,743,392,199 ( = (3^n - 3)/2 ). Twenty measurements to find one unfit ball; it's like finding a needle in some haystack. The problem resolution reminds me of the 1-Bit Compressive Sensing paper by Petros Boufounos and Richard Baraniuk (related talks are "L1 Minimization Without Amplitude Information" and 1-Bit Compressive Sensing). Of related interest is the Group Testing in the Life Sciences document, of which one of the authors, Nicolas Thierry-Mieg, is the author of the STD algorithm that was mentioned in the Shifted Transversal Designs and Expander Graphs entry of Raghu Kainkaryam's blog. To give you an idea of the power of the web, look at the exchange in the comment section between Raghu and Radu Berinde. Radu is one of the authors of the Sparse Recovery Experiments with Sparse Matrices wiki.

* In the category "...if it walks like a duck and quacks like a duck...", I found the following entry on the arxivblog featuring the paper Ghost imaging with a single detector by Yaron Bromberg, Ori Katz and Yaron Silberberg (also here). The abstract reads:

We experimentally demonstrate pseudothermal ghost-imaging and ghost-diffraction using only a single single-pixel detector. We achieve this by replacing the high resolution detector of the reference beam with a computation of the propagating field, following a recent proposal by Shapiro [J. H. Shapiro, arXiv:0807.2614 (2008)]. Since only a single detector is used, this provides an experimental evidence that pseudothermal ghost imaging does not rely on non-local quantum correlations.
In addition, we show the depth-resolving capability of this ghost-imaging technique.

If you look at the comment section, you'll notice a commenter named Wim making the connection to CS. As I replied in the comment section, if you:

• Reconstruct a 300 x 300 image with 16000 measurements (about 17% of the original number of pixels),
• Obtain each measurement by the application of random masks,
• Need a specific nonlinear algorithm to do the reconstruction, and
• Use a single pixel (this is not an actual condition, but it clearly points to a similarity to the Rice camera),

then one can safely say that what you are doing is compressed sensing. It would be fascinating to see how the Gerchberg-Saxton phase-retrieval algorithm mentioned in the paper compares to the recently featured Compressed Sensing Phase Retrieval of Matthew Moravec, Justin Romberg and Richard Baraniuk. Let us also note the authors' interest in detecting depth as well as investigating nonlinear adaptive measurements, as witnessed in the conclusion section of the paper:

Furthermore, the use of a SLM enables the implementation of closed-loop feedback schemes, potentially reducing the number of realizations needed for image reconstruction.

More information and the movie of the reconstruction can be found here.

* The papers accepted at ICASSP'09 are listed here. We have covered some of them before. Hadi Zayyani forwarded me this one: Bayesian Pursuit Algorithm for Sparse Representation by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:

In this paper, we propose a Bayesian Pursuit algorithm for sparse representation. It uses both the simplicity of the pursuit algorithms and an optimal Bayesian framework to determine the active atoms in the sparse representation of the signal. We show that using Bayesian hypothesis testing to determine the active atoms from the correlations leads to an efficient activity measure.
Simulation results show that our suggested algorithm has the best performance among the algorithms which are implemented in our simulations.

If one were to decompose the signal into wavelets, it looks as though this approach would be very similar to the recent BOMP algorithm featured in Block-Sparsity: Coherence and Efficient Recovery by Yonina Eldar and Helmut Bolcskei. Hadi tells me that he is going to post an implementation of his algorithm on his page a little later. I am going to add the algorithm to the Big Picture page as soon as it is available.

The book by Aapo Hyvärinen, Jarmo Hurri, and Patrik Hoyer is released free for the time being.

Piotr Indyk and Milan Ruzic just released the slides of their FOCS presentation entitled Near-Optimal Sparse Recovery in the L1 norm, featuring some of the results of the Sparse Recovery Experiments with Sparse Matrices.

Credit Photo: Igor Carron. Coast of Greenland.

Wednesday, December 17, 2008

CS: Group Testing and Sparse Signal Recovery, A linear solution to the 12-balls problem

The first time I became aware of group testing was when I was confronted with the problem described by Robert H. Thouless in a paper entitled "The 12-Balls Problem as an Illustration of the Application of Information Theory" [1]:

You have 12 balls of the same size. 11 of these balls are the same weight, one is different; you do not know whether this ball is heavier or lighter than the others. You have also a balance on which any group of the balls can be weighed against any other group. How can you discover by means of three weighings which is the different ball and whether it is heavier or lighter than the other balls?

I was slow and young (I am now slow and older :-)) and it took me a whole day to figure it out. Ever since I started looking into compressed sensing, I have been looking for an introductory paper connecting group testing and compressed sensing.
That paper has come out and is entitled Group Testing and Sparse Signal Recovery, by Anna Gilbert, Mark Iwen and Martin Strauss. The abstract reads:

Traditionally, group testing is a design problem. The goal is to design an optimally efficient set of tests of items such that the test results contain enough information to determine a small subset of items of interest. It has its roots in the statistics community and was originally designed for the Selective Service during World War II to remove men with syphilis from the draft [5]. It appears in many forms, including coin-weighing problems, experimental designs, and public health. We are interested in both the design of tests and the design of an efficient algorithm that works with the tests to determine the group of interest, because many of the same techniques that are useful for designing tests are also used to solve algorithmic problems in compressive sensing, as well as to analyze and recover statistical quantities from streaming data. This article is an expository article, with the purpose of examining the relationship between group testing and compressive sensing, along with their applications and connections to sparse function learning.

Until today, I thought that the solution to the 12-balls problem was an adaptive one, as my solution had the second weighing depend on the results of the first weighing. It turns out that there is also a non-adaptive solution to it, as described here (my solution is the tree-like solution below the non-adaptive one. Ain't the interwebs great?). In other words, a CS-like, non-adaptive linear measurement technique seems to be doing as well as a nonlinear (adaptive) one!

Reference:
[1] The 12-Balls Problem as an Illustration of the Application of Information Theory, Robert H. Thouless, The Mathematical Gazette, Vol. 54, No. 389 (Oct., 1970), pp.
246-249.

Credit: NASA/JPL/Space Science Institute, Enceladus

Tuesday, December 16, 2008

CS: Comments on the LB-101M chip, NIPS'08, Compressed sensing in MRI with non-Fourier encoding

Yesterday, I jumped to conclusions in saying that the LB-101M chip was performing the algorithm mentioned in the presentation of Stephane Mallat at ECTV'08. After asking him specifically that question, he answered back stating that the LB-101M chip did not implement the algorithm of the presentation as I initially stated. He also stated that the chip implemented a somewhat smarter algorithm given the hardware constraints. In the description one can see these constraints: the chip "was implemented on a small Altera™ Cyclone-II 70 FPGA, with only 70000 logic elements." If you have ever worked with an FPGA, you know how much space a multiplication takes, and one can only be amazed at what this chip is doing.

The NIPS'08 online papers are out.

From the Rice Compressive Sensing Repository, there is a new paper I have not covered before:

Compressed sensing (CS) has inspired significant interest because of its potential to reduce data acquisition time. There are two fundamental tenets to CS theory: (1) signals must be sparse or compressible in a known basis, and (2) the measurement scheme must satisfy specific mathematical properties (e.g., restricted isometry or incoherence properties) with respect to this basis. While MR images are often compressible with respect to several bases, the second requirement is only weakly satisfied with respect to the commonly used Fourier encoding scheme. This paper explores the possibility of improved CS-MRI performance using non-Fourier encoding, which is achieved with tailored spatially-selective RF pulses. Simulation and experimental results show that non-Fourier encoding can significantly reduce the number of samples required for accurate reconstruction, though at some expense of noise sensitivity.
Theoretical guarantees and their limitations in MRI applications are also discussed.

Rachel Ward renamed "Cross validation in compressed sensing via the Johnson-Lindenstrauss Lemma" into Compressed sensing with cross validation.

Monday, December 15, 2008

CS: Very High Speed Incoherent Projections for Superresolution, and a conference.

Stephane Mallat made a presentation at ECTV'08 (I mentioned it before). As some of you may know, Stephane Mallat is known for his contribution to the development of the wavelet framework. He has also been involved in a start-up in the past few years. The start-up, initially called Let It Wave, devised technologies around the bandelet families of functions (in particular in collaboration with Gabriel Peyre). It looks as though the latest development of this start-up has been the conversion of today's videos into high definition video. In that presentation, entitled Sparse Geometric Superresolution, Stephane mentions that the challenge of up-conversion involves the ability to produce an increase of 20 times the amount of initial information in the low resolution video. As he explains, the chip that is supposed to produce this feat should cost $5, and it also has to use fast algorithms; matching pursuits don't do well in that very high speed conversion. The presentation in the video is very interesting in detailing the method. The thing I note is the thing he doesn't say in these exact words: in order to do a fast search in a dictionary, he projects the low resolution video onto an incoherent basis. From that projection, he compares it with the projections of a very large dictionary onto that incoherent basis and does pattern matching. A person has asked me before about doing CS with JPEGs: I guess one can do that for this purpose (superresolution).
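The search step described above — comparing incoherent projections instead of raw patches — can be sketched in a few lines. This is a toy illustration with made-up dimensions (a random projection standing in for the incoherent basis), not the algorithm in the chip:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dictionary of 10000 flattened high-resolution patches.
n_atoms, dim = 10000, 64
dictionary = rng.standard_normal((n_atoms, dim))

# Random (incoherent) projection down to 16 dimensions; the projected
# dictionary is precomputed once, so each query only compares 16 numbers.
P = rng.standard_normal((16, dim)) / np.sqrt(16)
proj_dict = dictionary @ P.T

def nearest_atom(patch):
    """Index of the dictionary atom closest to `patch`, compared in the projected space."""
    d = proj_dict - patch @ P.T
    return int(np.argmin(np.einsum("ij,ij->i", d, d)))

# A query equal to atom 123 plus small noise should match atom 123.
query = dictionary[123] + 0.01 * rng.standard_normal(dim)
print(nearest_atom(query))
```

The point of the projection is speed: distances between projected vectors approximately preserve distances between the original patches (Johnson-Lindenstrauss), so the nearest-neighbor search runs in the low-dimensional space.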
In a different area, we have seen this type of procedure being undertaken by Jort Gemmeke in speech problems (Using sparse representations for missing data imputation in noise robust speech recognition by Jort Gemmeke and Bert Cranen). At 7.5 Gbit/s, I wonder how we could use the LB-101M chip to do all kinds of superresolution beyond images.... I think it also fits the description of hardware performing CS, so I will include it in the CS hardware page. [Update: I am told by Stephane Mallat that the LB-101M chip does not implement this algorithm. It implements a somewhat smarter algorithm given the hardware constraints.]

I also found it funny that Stephane mentioned the testing procedure used by experts to gauge whether a new imaging technology is acceptable. They simply go into a room and watch images and movies for hours with the new technology and provide some feedback on how good or bad it is. That reminded me of how Larry Hornbeck described to us how he had to make the DMD technology acceptable for imaging/projector technology. The technology is now integrated in DLP projectors, but it took a while before the algorithm commanding the mirrors on the DMDs (the DMD controller) got good feedback from the professionals. Larry had become an expert over time, and his team at Texas Instruments was constantly checking with him while they were improving the algorithm. At one point, the team seemed a little miffed that Larry would always find problems when they did not seem to exist. They then switched technologies in the projection room, but Larry would still find the same problems. At which point, Larry and his team decided the DMD technology had indeed matured to the point that it would be hard for experts to find flaws. Larry received an Emmy award in 1998 for the development of this technology. The DMD is also at the heart of the Rice single pixel camera. Larry gave me one of his demonstration DMDs at the end of his presentation! woohoo.
On a totally unrelated note, a conference that features Compressed Sensing as a subject of interest is CHINACOM 2009: Int'l Conference on Communications and Networking in China, Information and Coding Theory Symposium, August 26-28, 2009, Xi'an, China. Saturday, December 13, 2008 CS: Freefalling, Hardware at 130,000 feet, Statistics on a cartoon, and a Silicon detector. Here is a fantastic rendering of what it would look like to fall from space (for those of you reading this through e-mail or an RSS feed, here is the direct link). Credit: Kyle Botha. This reminds me that you don't need a rocket to go up. Stratospheric balloons do that very well, as can be seen from the view of a webcam that was on board the HASP last year. Credit: CosmoCam. For those of you interested in having some hardware at those altitudes, you may be interested in the new request for proposals by the HASP folks. The deadline is Dec. 19th. The first cartoon video on CS has reached more than 1000 views (950 of them being me hitting that replay button). The statistics provided by Youtube give some insight into when the interest of the viewer was highest. I am not sure how they compute this, but the first bump in this video is when CS is connected to underdetermined systems and linear algebra 101. The video can be viewed here: Finally, a video on a type of photon detector made out of Silicon that seems to have reached some breakthrough. More pixels at lower cost is our future; how can we integrate this in new types of CS cameras? Friday, December 12, 2008 CS: Greedy Signal Recovery Review, Motion Segmentation, Efficient Sampling and Stable Reconstruction. I found three papers related to CS: The two major approaches to sparse recovery are L1-minimization and greedy methods. Recently, Needell and Vershynin developed Regularized Orthogonal Matching Pursuit (ROMP), which has bridged the gap between these two approaches. ROMP is the first stable greedy algorithm providing uniform guarantees.
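To make "greedy" concrete, here is a minimal sketch of plain Orthogonal Matching Pursuit, the simpler ancestor of ROMP and CoSaMP (the toy sizes and data are my own, not the papers' code): repeatedly pick the column most correlated with the residual, then re-fit by least squares on the selected support.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from measurements y = A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # greedy selection: column of A most correlated with the residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400)) / 10.0     # random matrix, roughly unit-norm columns
x_true = np.zeros(400)
x_true[[7, 123, 301]] = [1.5, -2.0, 0.7]
x_hat = omp(A, A @ x_true, k=3)
print(np.linalg.norm(x_hat - x_true) < 1e-8)
```

ROMP and CoSaMP differ mainly in selecting several coordinates per iteration and pruning the support, which is what buys their uniform guarantees.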
Even more recently, Needell and Tropp developed the stable greedy algorithm Compressive Sampling Matching Pursuit (CoSaMP). CoSaMP provides uniform guarantees and improves upon the stability bounds and RIC requirements of ROMP. CoSaMP offers rigorous bounds on computational cost and storage. In many cases, the running time is just O(N log N), where N is the ambient dimension of the signal. This review summarizes these major advances. ROMP and CoSaMP codes are accessible from the reconstruction section of the Big Picture. Let us also note the importance of the Restricted Isometry Constant. Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis. The attendant talk is here. In a different direction, Shankar Rao, Roberto Tron, Rene Vidal, and Yi Ma just released the following: In this paper, we study the problem of segmenting tracked feature point trajectories of multiple moving objects in an image sequence. Using the affine camera model, this problem can be cast as the problem of segmenting samples drawn from multiple linear subspaces.
In practice, due to limitations of the tracker, occlusions, and the presence of nonrigid objects in the scene, the obtained motion trajectories may contain grossly mistracked features, missing entries, or corrupted entries. In this paper, we develop a robust subspace separation scheme that deals with these practical issues in a unified mathematical framework. Our methods draw strong connections between lossy compression, rank minimization, and sparse representation. We test our methods extensively on the Hopkins155 motion segmentation database and other motion sequences with outliers and missing data. We compare the performance of our methods to state-of-the-art motion segmentation methods based on expectation-maximization and spectral clustering. For data without outliers or missing information, the results of our methods are on par with the state-of-the-art results, and in many cases exceed them. In addition, our methods give surprisingly good performance in the presence of the three types of pathological trajectories mentioned above. All code and results are publicly available at http://perception.csl.uiuc.edu/coding/motion/. Yi Ma and his team continue doing wonders. Here they use results in rank minimization and CS to provide robust trackers over missing data. And finally, here is a section of papers and presentations not strictly focused on Compressed Sensing, but rather making use of or referencing Compressive Sensing as a neighboring technique. There is the presentation entitled Pseudospectral Fourier reconstruction with IPRM by Karlheinz Gröchenig and Tomasz Hrycak. Also, Arxiv now allows a full search in the text of the papers in its library. Here is a list of papers that mention Compressive Sensing without Compressive Sensing being the main subject of the paper: Thursday, December 11, 2008 CS: Compressed Sensing Phase Retrieval, Deterministic SFT, A generalization of Kolmogorov's theory of n-widths for infinite dimensional spaces, a job.
From Wikimization, here is a newer version of Toward 0-norm Reconstruction, and a Nullspace Technique for Compressive Sampling as presented by Christine Law with Gary Glover at the Linear Algebra and Optimization Seminar (CME510), iCME, Stanford University, November 19, 2008. Unlike what I said earlier, the paper entitled Compressed Sensing Phase Retrieval with Matthew Moravec, Justin Romberg and Richard Baraniuk can be found here. Adi Akavia was speaking on Monday, December 8, 2008 at MIT. The talk presentation reads: Computing the Fourier transform is a basic building block used in numerous applications. For data intensive applications, even the O(N log N) running time of the Fast Fourier Transform (FFT) algorithm may be too slow, and sub-linear running time is necessary. Clearly, outputting the entire Fourier transform in sub-linear time is infeasible; nevertheless, in many applications it suffices to find only the $\tau$-significant Fourier transform coefficients, that is, the Fourier coefficients whose magnitude is at least a $\tau$-fraction (say, 1%) of the energy (i.e., the sum of squared Fourier coefficients). We call algorithms achieving the latter SFT algorithms. In this work we present a *deterministic* algorithm that finds the $\tau$-significant Fourier coefficients of functions f over *any finite abelian group G* in time polynomial in log|G|, 1/$\tau$ and L_1(f) (for L_1(f) denoting the sum of absolute values of the Fourier coefficients of f). Our algorithm is robust to random noise. Our algorithm is the first deterministic and efficient (i.e., polynomial in log|G|) SFT algorithm to handle functions over any finite abelian groups, as well as the first such algorithm to handle functions over Z_N that are neither compressible nor Fourier-sparse. Our analysis is the first to show robustness to noise in the context of deterministic SFT algorithms.
Using our SFT algorithm we obtain (1) deterministic (universal and explicit) algorithms for sparse Fourier approximation, compressed sensing and sketching; (2) an algorithm solving the Hidden Number Problem with advice, with cryptographic bit security implications; and (3) an efficient decoding algorithm in the random noise model for polynomial rate variants of Homomorphism codes and any other concentrated and recoverable codes. Hum... (I put the emphasis in the text) here is another statement along the lines that sparsity is not the main prerequisite for subsampling. I look forward to seeing more of that paper. On December 17-19, 2008, at the Workshop on Optimization and Applications, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences in Sofia, Bulgaria, Ognyan Kounchev will give a talk entitled A generalization of Kolmogorov's theory of n-widths for infinite dimensional spaces: applications to Compressive sensing. The abstract reads: Recently, the theory of widths has attracted high interest due to its importance for so-called Compressive sensing; see the works of D. Donoho, E. Candes, T. Tao, R. Baraniuk and others. On the other hand, the theory of n-widths of Kolmogorov (a kind of Optimal recovery theory) is not appropriate for spaces of functions depending on several variables - this is seen from the simplest examples which one would like to have treated by a theory of Optimal recovery. We generalize the theory of n-widths of Kolmogorov by considering approximation by infinite-dimensional spaces of functions which have, so to speak, "harmonic dimension n" in some sense. A large class of such spaces having "harmonic dimension n" is provided by the solutions of elliptic differential operators of order 2n. We indicate possible applications to Compressive sensing. Here is a new job posting for a post-doc in the group of Jean-Luc Starck at CEA near Paris.
The description reads: JOB OPPORTUNITY Title of the job: Post-doctoral position (3 years), Signal/Image processing at CEA Saclay, Service d'Astrophysique. The Service d'Astrophysique (SAp) at CEA Saclay invites applications for a postdoctoral appointment in the area of data analysis/image processing of astronomical data to work with Jean-Luc Starck. The CEA Saclay is a government research center situated 40 minutes from central Paris, France. The SAp has a wide interest in astrophysics ranging from planets to cosmology, with a specialisation in space missions (e.g. Euclid, XMM-Newton, Herschel, PLANCK, JWST, Integral, etc.) and instrumentation (e.g. Megacam on the Canada-France-Hawaii Telescope). The position is to work on sparse representation of astronomical data for different applications such as non-Gaussianity detection, inverse problems and compressed sensing. Candidates should have a PhD in Image processing, Physics or Astronomy. Previous experience in sparse representations (wavelets, curvelets, etc.) and the development of data analysis methods is preferred, but experience in related areas is also suitable. The position, funded for at least 3 years (up to 5 years), will be renewed on a yearly basis depending on scientific progress and achievement. The gross minimum salary will be 34,000€ annually (~2,260€ net per month), and will be adjusted according to experience and family situation. A minimum of 5,000€ per year of travel money for each position will also be provided, in addition to the usual funding support of any French institution (medical insurance, etc.). Applicants are requested to send a CV, a list of publications, and a brief statement of research interests. This material, together with three letters of reference, should be sent by the closing date to Jean-Luc Starck, Laboratoire Astrophysique des Interactions Multi-échelles, Irfu, Service d'Astrophysique, CEA Saclay, F-91191 GIF-Sur-YVETTE CEDEX, France. Email: check the flyer for more information.
The closing date for receipt of applications: February 28, 2009. The job posting has been added to the CSJobs page. The ICPR 2010 conference has Compressed Sensing as a subject of interest. Image Credit: NASA/JPL/Space Science Institute, image of Tethys taken on December 9th, 2008. Wednesday, December 10, 2008 CS: Compressive Sensing Technology Watch (Part Two) Here is a presentation on the influence of GPUs in Compressive Sensing. It is located at: I also had to reframe the cultural references. Instead of Coluche, a well-known French artist, I replaced him with a more U.S.-centered cultural reference: Seinfeld's George Costanza. I explained previously why in CS: Coded Mask Imagers: What are they good for? The George Costanza "Do the Opposite" Sampling Scheme. Here is the video.
Technical Article # Digital Signal Processing in Scilab: How to Decode an FSK Signal September 25, 2018 by Robert Keim ## Learn about a DSP technique that extracts the original digital data from a demodulated frequency-shift-keying baseband signal. #### Previous Articles on Scilab-Based Digital Signal Processing One of the methods used to encode binary data in a sinusoidal waveform is called frequency shift keying (FSK). It’s a simple concept: one frequency represents a zero, and a different frequency represents a one. For example: A low-frequency FSK signal (say, in the tens of kilohertz) can be shifted to a higher frequency and then transmitted. This is an effective and fairly straightforward way to create an RF system that achieves wireless transmission of digital data—assuming that we have a receiver that can convert all these sinusoidal waveforms back into ones and zeros. The process of extracting digital data from a transmitted FSK signal can be divided into two general tasks: First, the high-frequency received signal is converted to a low-frequency baseband signal. I refer to this as “demodulation.” Second, the baseband waveform must be converted into ones and zeros. I don’t think that it would be incorrect to call this second step “demodulation,” but to avoid confusion I’ll always use the term “decoding” when I’m talking about converting lower-frequency analog waveforms to digital bits. ### Decoding in Software For systems with moderate data rates, it is perfectly feasible to digitize an FSK baseband signal and perform decoding in software. (You can check out our introduction to software-defined radio for more information on RF systems that implement important signal-processing tasks in software.)
This is an excellent approach, in my opinion, because it allows the receiver to benefit from the versatility of digital signal processing, and it also provides a convenient way to record and analyze received signals during testing. In this article, we’ll use Scilab to decode an FSK signal, but the computations involved are not complicated and could easily be implemented as C code in a digital signal processor. ### First Things First: The Math Our technique for decoding FSK is based on the multiplication of sinusoidal signals. Consider the following trigonometric identities: $$\sin(x)*\sin(y)=\frac{1}{2}(\cos(x-y)-\cos(x+y))$$ $$\cos(x)*\cos(y)=\frac{1}{2}(\cos(x-y)+\cos(x+y))$$ Let’s make this more consistent with the engineering world by using ω1t and ω2t instead of x and y. $$\sin(\omega_1t)*\sin(\omega_2t)=\frac{1}{2}(\cos((\omega_1-\omega_2)t)-\cos((\omega_1+\omega_2)t))$$ $$\cos(\omega_1t)*\cos(\omega_2t)=\frac{1}{2}(\cos((\omega_1-\omega_2)t)+\cos((\omega_1+\omega_2)t))$$ (Note that we are ignoring the effect of phase differences; in this article we’re assuming that all the signals have equal phase.) We can multiply two sine waves or two cosine waves, and the result consists of two cosine waves, with frequencies equal to the sum and the difference of the two original frequencies. The critical observation here is that the cos((ω1–ω2)t) waveform will have a very low frequency if the two input waves have a very similar frequency. In the idealized mathematical realm, we could input two waveforms of identical frequency and cos((ω1–ω2)t) becomes cos(0t) = cos(0) = 1. Thus, if we multiply two sine waves or two cosine waves of equal frequency, the resulting waveform will have a relatively large DC offset. In the context of decoding FSK, we can say the following: Even if the frequencies are similar, rather than identical, there will still be a large DC offset because the cos((ω1–ω2)t) waveform will start at 1 and decrease very slowly relative to one bit period. 
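We can check this claim numerically with the frequencies used later in this article (10 kHz and 30 kHz at a 300 kHz sampling rate): over one bit period, the matching-frequency product averages to about 0.5 while the mismatched product averages out to about 0. This is a NumPy sketch rather than Scilab, for readers following along in Python:

```python
import numpy as np

fs = 300e3                                 # sampling rate used later in the article
n = np.arange(30)                          # one bit period = one cycle at 10 kHz
ref0 = np.sin(2 * np.pi * 10e3 * n / fs)   # binary 0 reference (10 kHz)
ref1 = np.sin(2 * np.pi * 30e3 * n / fs)   # binary 1 reference (30 kHz)

# matching frequencies -> large DC offset; mismatched -> averages out
print(np.mean(ref0 * ref0))   # 0.5 (up to float rounding)
print(np.mean(ref0 * ref1))   # ~0
```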
The bit period is the amount of time required to encode one digital bit; in the diagram above, the bit period corresponds to one cycle of the binary 0 frequency (or three cycles of the binary 1 frequency). The portion of the analog waveform that is contained in one bit period is called a symbol. In this article we’re using binary (i.e., two-frequency) FSK, and consequently one symbol corresponds to one digital bit. It is possible to use more than two frequencies, such that one symbol can transfer multiple bits. ### Decoding FSK, Step by Step We now have the information we need to formulate an FSK decoding procedure: 1. Digitize the received baseband signal. 2. Identify the beginning of the bit period. This can be accomplished with the help of a training sequence; for more information, click here and scroll down to the “Preamble” heading in the “Anatomy of a Packet” section. For this article we’ll assume that the data was encoded as sine waves (as shown in the diagram above) rather than cosine waves. 3. Multiply each symbol by a sine wave with the binary 0 frequency and by a sine wave with the binary 1 frequency. 4. Calculate the DC offset of each symbol. 5. Choose a threshold and decide between binary 0 and binary 1 based on whether the DC offset of the symbol is above or below the threshold. ### The Scilab Implementation We’ll start by generating one-symbol sine waveforms at the binary 0 frequency (10 kHz) and the binary 1 frequency (30 kHz).

ZeroFrequency = 10e3;
OneFrequency = 30e3;
SamplingFrequency = 300e3;
Samples_per_Symbol = SamplingFrequency/ZeroFrequency;
n = 0:(Samples_per_Symbol-1);
Symbol_Zero = sin(2*%pi*n / (SamplingFrequency/ZeroFrequency));
Symbol_One = sin(2*%pi*n / (SamplingFrequency/OneFrequency));
plot(n, Symbol_Zero)
plot(n, Symbol_One)

Now let’s create the received baseband signal.
We can do this by concatenating Symbol_Zero and Symbol_One arrays; we’ll use the sequence 0101:

ReceivedSignal = [Symbol_Zero Symbol_One Symbol_Zero Symbol_One];
plot(ReceivedSignal)

Next, we multiply each symbol in the received signal by the waveform for a binary 0 symbol and by the waveform for a binary 1 symbol. We accomplish this step by concatenating Symbol_Zero and Symbol_One arrays according to the number of symbols in the received signal and then using element-wise multiplication; refer to this article for more information on element-wise multiplication in Scilab (or MATLAB).

Decoding_Zero = ReceivedSignal .* [Symbol_Zero Symbol_Zero Symbol_Zero Symbol_Zero];
Decoding_One = ReceivedSignal .* [Symbol_One Symbol_One Symbol_One Symbol_One];
plot(Decoding_Zero)
plot(Decoding_One)

Don’t be distracted by these rather complicated waveforms; all we’re interested in is the DC offset, which in mathematical terms is simply the mean. If we want to display the DC offset corresponding to each symbol, we first need to generate some new arrays:

for k=1:(length(Decoding_Zero)/Samples_per_Symbol)
  SymbolOffsets_Zero(((k-1)*Samples_per_Symbol)+1:k*(Samples_per_Symbol)) = mean(Decoding_Zero(((k-1)*Samples_per_Symbol)+1:k*(Samples_per_Symbol)));
end

for k=1:(length(Decoding_One)/Samples_per_Symbol)
  SymbolOffsets_One(((k-1)*Samples_per_Symbol)+1:k*(Samples_per_Symbol)) = mean(Decoding_One(((k-1)*Samples_per_Symbol)+1:k*(Samples_per_Symbol)));
end

You might need to ponder these commands a bit to understand exactly what I’m doing, but here’s the basic idea: The for loop is used to progress one symbol at a time through the Decoding_Zero and Decoding_One arrays. In the SymbolOffsets_Zero and SymbolOffsets_One arrays, all the data points corresponding to one symbol are filled with the mean of the relevant symbol in the Decoding_Zero and Decoding_One arrays.
We have 30 samples per symbol, so the first command operates on array values 1 to 30, the next operates on array values 31 to 60, and so forth. Here are the results:

plot(SymbolOffsets_Zero)
plot(SymbolOffsets_One)

The SymbolOffsets_Zero array shows us the DC offset resulting from the multiplication of the received baseband symbol with the binary 0 frequency, and the SymbolOffsets_One array shows us the DC offset resulting from the multiplication of the received baseband symbol with the binary 1 frequency. We know that multiplying two similar frequencies will result in a relatively large DC offset. Thus, a value of 0.5 in the SymbolOffsets_Zero array indicates that the received symbol was a binary 0, and a value of 0.5 in the SymbolOffsets_One array indicates that the received symbol was a binary 1. ### Conclusion This article presented a mathematical approach for decoding FSK. The procedure was implemented in Scilab, but it would not be difficult to translate the Scilab commands into a high-level programming language such as C. We’ll continue working with FSK decoding in the next article.
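For readers who prefer to experiment outside Scilab, the whole session above condenses into a few lines of NumPy (same frequencies, same 0101 sequence; this is my own translation, not the author's code):

```python
import numpy as np

fs, f0, f1 = 300e3, 10e3, 30e3
sps = int(fs / f0)                    # 30 samples per symbol
n = np.arange(sps)
sym0 = np.sin(2 * np.pi * f0 * n / fs)
sym1 = np.sin(2 * np.pi * f1 * n / fs)

received = np.concatenate([sym0, sym1, sym0, sym1])   # encodes 0101

# multiply by each reference, then take per-symbol means (the DC offsets)
symbols = received.reshape(-1, sps)
off0 = (symbols * sym0).mean(axis=1)
off1 = (symbols * sym1).mean(axis=1)

bits = (off1 > off0).astype(int)
print(bits)                           # [0 1 0 1]
```

Reshaping to a (symbols, samples) matrix replaces the explicit for loop, but the per-symbol means are the same quantities the Scilab code computes.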
# A problem in algebraic number theory I'm trying to do the homework for a course I found online. A problem on the first homework goes as follows: Suppose A is an integral domain which is integrally closed in its fraction field K. Suppose q in A is not a square, so that L = K(sqrt(q)) is a quadratic extension of K. Describe the conditions on r,s in K which are necessary and sufficient for r+s*sqrt(q) to be integral over A in L. I have absolutely no clue how to approach this, as A is not even assumed to be a UFD. The proof for A=Z uses the fact that Z is a UFD, so the minimal polynomial over the fraction field equals the minimal polynomial over A for every integral element (Gauss's lemma). Does anyone have any ideas on how to approach this? Thanks. Hurkyl Staff Emeritus Gold Member I confess I don't see the problem -- I suppose I've been doing this stuff for too long (or maybe not long enough?). Can you narrow it down? What is the argument you would like to use and where do you think it might not work? I confess I don't see the problem -- I suppose I've been doing this stuff for too long (or maybe not long enough?). Can you narrow it down? What is the argument you would like to use and where do you think it might not work? Ok, so I know that an element of that form satisfies the equation: x^2-2rx+r^2-s^2*q For a UFD, this would also have to be the polynomial giving the smallest integral relation for r+s*sqrt(q) over A. Thus, we are reduced to asking when these coefficients belong to A, which gives us conditions on r and s. The only reason I know this works for a UFD is because given a monic irreducible polynomial f(X) over A[X] having r+s*sqrt(q) as a root, the minimal polynomial over the fraction field divides it in K[X], but Gauss's lemma tells us that f(X) is irreducible over K, so f(X) equals the minimal polynomial. This reduces the problem above to checking that the coefficients of the polynomial I have written down are actually in A.
I don't see how this approach can be used for a general integral domain. If there is some approach using some theorems I don't know about please tell me, I'd like to do some reading about those (this is my first exposure to this subject). Hurkyl Staff Emeritus Gold Member Hrm. Is this an equivalent statement of what's giving you trouble? You're worried that the following two statements might be true: • x^2-2rx+r^2-s^2q is not an element of A[X] • r+s*sqrt(q) is a root of some monic higher degree polynomial in A[X] I'm pretty sure "integrally closed" tells us this is impossible, but I don't recall the precise details. (and FYI, I'm about to leave) Yup, that's exactly the problem I have. I actually figured out why this is impossible... thanks.
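For future readers, here is a sketch of how the thread resolves (hedged: it assumes the characteristic of K is not 2). The point is that the hypothesis "integrally closed" replaces the Gauss-lemma step used for Z:

```latex
Let $\alpha = r + s\sqrt{q}$ with $s \neq 0$; its minimal polynomial over $K$ is
\[
  m_\alpha(x) = x^2 - 2rx + (r^2 - qs^2).
\]
If $2r \in A$ and $r^2 - qs^2 \in A$, then $\alpha$ is visibly integral over $A$.
Conversely, if $\alpha$ is a root of a monic $f \in A[x]$, then so is the
conjugate $\bar\alpha = r - s\sqrt{q}$ (apply the nontrivial $K$-automorphism
of $K(\sqrt{q})$ to the equation $f(\alpha) = 0$), so $\bar\alpha$ is integral
over $A$ as well. Hence the trace $\alpha + \bar\alpha = 2r$ and the norm
$\alpha\bar\alpha = r^2 - qs^2$ are integral over $A$; both lie in $K$, and $A$
is integrally closed in $K$, so both lie in $A$. Therefore
\[
  r + s\sqrt{q} \text{ is integral over } A
  \iff 2r \in A \text{ and } r^2 - qs^2 \in A.
\]
```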
# Convert Complex Chinese Numbers into Arabic Numbers Your task is to convert Chinese numerals into Arabic numerals. This problem is similar to Convert Chinese numbers, but more complex. Also, answers given there mostly don't satisfy all the conditions. Chinese digits/numbers are as follows: 0 零 1 一 2 二 2 两 3 三 4 四 5 五 6 六 7 七 8 八 9 九 10 十 100 百 1000 千 10000 万 10^8 亿 ### Multiple-digit numbers Multiple-digit numbers are created by adding from highest to lowest and by multiplying from lowest to highest. In the case of addition, each number higher than 9 can be multiplied by 1 and it won't change its meaning. Both 亿万千百十一 and 一亿一万一千一百一十一 are equal to 100011111. We multiply in the following fashion: 五千 = 5000, 一百万 = 1000000, 三千万 = 30000000. Chinese always takes the lowest possible multiplier (just like we don't say hundred hundred but ten thousand). So 百千 doesn't exist to represent 100000 since we have 十万, 十千 doesn't exist since we have 万, 十千万 doesn't exist since we have 亿, and 十百 doesn't exist since we have 千. ### Special cases 0 is very important and it was actually the biggest problem in the other code golf question. Trailing zeroes are omitted in Chinese, so 零 indicates interior zeroes. Let's look at some examples: • 三百零五 = 305 • 三百五 = 350 - no interior zeroes. You can notice that we don't need 十 here, since a trailing zero is omitted. • 一千万零一百 = 10000100 • 三千零四万 = 30040000 • 六亿零四百零二 = 600000402 - here we have 2 interior zeroes. As you can see though, even if there's a gap of more than one order of magnitude (in the example it's 亿 and 百), two 零s can't stand next to each other; one is enough for each gap, no matter how big it is. • 一亿零一 = 100000001 - again, no need for more than one 零 if there's one gap, no matter how big. • 八千万九千 = 80009000 - no need for 零 since there are no interior zeroes. Why are there no interior zeroes? Because it follows the highest-to-lowest addition without omitting an order of magnitude.
Right after 万 we have 千 (九 is a multiplication component, not an addition one) and not, let's say, 百. More examples: check out the two "示例" paragraphs. 2 is also special in Chinese, as it can be represented with the character 两 if it's a multiplier of 100 and higher numerals. Both 两千两百二十二 and 二千二百二十二 are 2222. ## Rules Constraints: 0 <= N < 10^9 Edit: I don't care what happens from 10^9 onwards. Input doesn't have any examples equal to or higher than 10^9 for that reason. ## Test cases Input: 一亿两千三百零二万四千二百零三 Output: 123024203 40000010 345500 400200901 20009010 240222 2012 0 Good luck! • Chinese speaker here - I don't think I ever use 零 for interior zeroes unless the place value isn't specified - ex. 350 (三百五) vs 305 (三百零五). Otherwise, the position is clearly indicated by the base characters.... (ex. 2012 would be 两千十二 which literally reads two thousand ten two, no need for interior zeroes) – Quintec Nov 17 '19 at 16:39 • Thank you for your response! You are right that if a zero is before anything other than a unit digit it's optional (but not prohibited). Let's take the 2012 case as an example. It can be both. If you look at chapter numbers for internet novels, they indeed always add 零. – Rysicin Nov 17 '19 at 16:50 • So just to clarify - is it ok to output with or without? You should specify this in the question – Quintec Nov 17 '19 at 16:52 • The output is Arabic numerals; in the input there are many cases with 零. As long as the input matches the output, it's correct. Your code should therefore be able to correctly interpret an example with 零 inside, while also outputting 0 for 零 as input. – Rysicin Nov 17 '19 at 17:01 • I'm not sure if I understand the question, as your second example returns a number higher than a trillion. The first one is correct. Maybe you can think of it as building blocks?
(1)[10^8] (2302)[10000] (4)[1000] (2)[100] (0)[10] (3)[1] Edit: Of course the (2302) block creator is also made of a smaller block - (2)[1000] (3)[100] (0)[10] (2)[1] – Rysicin Nov 17 '19 at 19:16 # 05AB1E, 52 51 bytes 0•9∊»£¬ƵçoiKβÐg•19вIÇƵª%èvX¨UOy¬iθ°UX‰¤_ªX*]DgiX*}O Try it online! 0 # literal 0 •9∊»£¬ƵçoiKβÐg•19в # compressed list [2, 14, 0, 3, 0, 8, 0, 6, 2, 9, 0, 18, 0, 1, 0, 13, 5, 0, 10, 0, 0, 7, 12, 4] IÇ # codepoints of the input Ƶª% # modulo 270 è # index into the above list, with wraparound v ] # for each number y in that list: X # push the variable X ¨ # drop the last digit U # store that in X O # sum all numbers on the stack y # push y ¬i ] # if the first digit of y is 1: θ° # 10**(the last digit of y) U # store that in X X‰ # divmod X ¤_ª # if the modulo is 0, append 1 X* # multiply by X Dgi } # if length of the top of stack is 1: X* # multiply it by X O # sum of the stack # implicit output • The output is wrong for 一亿两千三百零二万四千二百零三 and will be wrong for anything that ends with 零2-9. – Rysicin Nov 18 '19 at 22:41 • Also, it doesn't meet the upper constraint. I put it in place for 3 reasons: because the numbers are already high enough as it is; it just gets repetitive; and we encounter a problem similar to the long-short scale for Arabic numbers (e.g. billion). The next numeral after 亿 is 兆. However, it can mean 10^6, 10^12 or 10^16 depending on context. I think due to this confusion you would more often see 万亿 instead of 兆 in the context of e.g. a trillion yuan. There's also an aberration of 亿亿 after this point... So let's just stop at 10^9. – Rysicin Nov 18 '19 at 23:28 • @Rysicin Things to avoid when writing challenges: Input validation. By default, it is assumed that all inputs are valid. If you really want to make input validation part of the question, you should explicitly state so in the question and add several test cases covering it. – Grimmy Nov 19 '19 at 0:49 • @Rysicin I fixed the issue with numbers ending with 0.
– Grimmy Nov 19 '19 at 0:54 • Good job, looks like everything is correct! Thanks for the feedback, I'll remove constraints and just write that I don't care about what happens starting 10^9. – Rysicin Nov 19 '19 at 1:00
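For reference (not a golfed answer), the rules above can be cross-checked with a straightforward Python parser. The handling of the trailing shorthand (三百五 = 350) and of the 万/亿 sections below is my own reading of the question, so treat it as a sketch:

```python
DIGITS = {'零': 0, '一': 1, '二': 2, '两': 2, '三': 3, '四': 4,
          '五': 5, '六': 6, '七': 7, '八': 8, '九': 9}
UNITS = {'十': 10, '百': 100, '千': 1000}
SECTIONS = {'万': 10 ** 4, '亿': 10 ** 8}

def parse_cn(s):
    total = 0          # value of completed 万/亿 sections
    section = 0        # value of the section being built
    num = 0            # pending digit waiting for its unit
    last_unit = 0      # most recent unit, for the trailing-shorthand rule
    saw_zero = False   # was 零 seen since the last unit?
    for ch in s:
        if ch == '零':
            saw_zero, num = True, 0
        elif ch in DIGITS:
            num = DIGITS[ch]
        elif ch in UNITS:
            section += (num or 1) * UNITS[ch]   # a bare unit counts as 1
            last_unit, num, saw_zero = UNITS[ch], 0, False
        elif ch in SECTIONS:
            section += num
            total += (section or 1) * SECTIONS[ch]
            last_unit, section, num, saw_zero = SECTIONS[ch], 0, 0, False
    if num and not saw_zero and last_unit >= 100:
        section += num * (last_unit // 10)      # trailing shorthand: 三百五 = 350
    else:
        section += num
    return total + section

print(parse_cn('一亿两千三百零二万四千二百零三'))   # 123024203
```

The saw_zero flag is what distinguishes 三百零五 = 305 from 三百五 = 350, per the "special cases" section of the question.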
# Difference between revisions of "User:JulianEskin" I am the former technician in the Silver Lab (2004-2007). E-mail: firstname.lastname@gmail.com Back to Lab Members Back to Silver Lab ${\displaystyle \sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}}$
# A submarine emits a sonar pulse, which returns from an underwater cliff in 1.02 s. If the speed of sound in salt water is 1531 m/s, how far away is the cliff? Given: the round-trip time is 1.02 s and the speed of sound in salt water is 1531 m/s. The time taken by the SONAR pulse to go from the submarine to the cliff is half of the round-trip time: $t = \frac{1.02}{2} = 0.51\ s$ And we know that $Distance = Speed \times Time$ Substituting the values in the above equation, we obtain $Distance = 1531\ m/s \times 0.51\ s = 780.8\ m$
# Calculus 2 integrals ## Homework Statement Here are the problems http://imgur.com/a/kbtPS The problems I need help with are 1 and 4(a) ## Homework Equations The second fundamental theorem of calculus ## The Attempt at a Solution For problem 1, I calculated the areas under the curve (using Riemann sums) and tried to find the x value that would approximate f(x)=-.75 For problem 4(a), all I did was note that since it was an integral, taking the derivative would just give back the integrand, and that's how I got my answer.
# Homework Help: Optics; Deriving the index of refraction from Snell's law 1. Jan 26, 2013 ### steve233 1. The problem statement, all variables and given/known data There is a diagram in the problem statement so here is a link to the image of the problem: http://imgur.com/KDrRsyO 2. Relevant equations Snell's Law: $n_{1} * sin(\theta_{1}) = n_{2} * sin(\theta_{2})$ 3. The attempt at a solution My attempt using Snell's law fails because I can't see how it is possible that n ever becomes squared. It looks like either some sort of vector addition or some use of Pythagorean theorem but I haven't been able to determine how to use those ideas to solve the problem. I can see some use of special triangles given the angles are 60 degrees, but it's a bit overwhelming and my trig skills are a bit rusty. Thanks. 2. Jan 26, 2013 ### TSny Start by writing out Snell's law for each of the two refractions. 3. Jan 26, 2013 ### tiny-tim hi steve233! hint: θ2 + θ3 = … ? 4. Jan 26, 2013 ### steve233 Hmm... So here is another attempt: 1st refraction: $n_{1} * sin(\theta_{1}) = n * sin(\theta_{2})$ 2nd refraction: $n * sin(\theta_{3}) = n_{1} * sin(\theta)$ Here I assume that the refractive index is the same when the light is outside of the prism and inside of the prism. I think I'm wrong here but, $\theta_{2} + \theta_{3} = 60^{\circ}$. I think this is wrong because it seems like $90^{\circ} - \theta_{2}$ = $60^{\circ} \Longrightarrow \theta_{2} = 30^{\circ}$. I'm still stuck on how things become squared though... Last edited: Jan 26, 2013 5. Jan 26, 2013 ### TSny Ok. You can assume that outside the prism you just have air where $n_1 ≈ 1.00$. You also know the value of $\theta_1$. So, you have three unknowns: $\theta_2$, $\theta_3$ and $n$. [Edit: That is, you need to eliminate $\theta_2$ and $\theta_3$ so that you can express $n$ in terms of $\theta$.] I'm not following you here. $n_1$ is for air and $n$ is for the prism material. Not sure how you got that, but I think it's right! 
So, that's your third equation. I don't see how you are deducing that. In fact, I don't think there is any way to find the values of $\theta_2$ and $\theta_3$ individually, since they would depend on $n$. The square will show up in doing the algebra of solving for $n$ in terms of $\theta$.

Last edited: Jan 26, 2013

6. Jan 26, 2013

### steve233

$n_{1}$ is indeed the refractive index of air, I should have mentioned that. So the equations become:

(1) $\sqrt{3} / 2 = n \sin(\theta_{2})$

(2) $n \sin(\theta_{3}) = \sin(\theta)$

(3) $\theta_{2} + \theta_{3} = 60^{\circ}$

From (3), $\theta_{3} = 60^{\circ} - \theta_{2}$, so

$n \sin(\theta_{3}) = n \sin(60^{\circ} - \theta_{2}) = n (\sin(60^{\circ})\cos(\theta_{2}) - \cos(60^{\circ})\sin(\theta_{2}))$

and from (1), $\theta_{2} = \arcsin(\sqrt{3}/(2n))$. So,

$n ((\sqrt{3}/2) \cos(\arcsin(\sqrt{3}/(2n))) - (1/2) \sin(\arcsin(\sqrt{3}/(2n)))) = \sin(\theta)$

This seems a little complicated... I know there is a simple replacement for cos(arcsin x) but I don't know anything that simply replaces sin(arcsin x). Am I on the wrong track here? Thanks again for the help.

7. Jan 26, 2013

### tiny-tim

hi steve233!

sin(arcsin(x)) = x

cos(arcsin(x)) = √(1 − x²)

8. Jan 26, 2013

### steve233

Ah yes... more evidence that I need to brush up on my trig :) I'll give this a shot and post the update soon. Thanks!

Edit: Success! Thanks for all of the help TSny and tiny-tim. I'm a bit new to these forums, is there some way to "upvote"?

Last edited: Jan 26, 2013

9. Jan 27, 2013

### tiny-tim

(just got up :zzz:)

not needed … saying "thanks" is enough!
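For readers wanting to check the result numerically: the thread's three equations can be traced forward (pick an n, compute the exit angle θ) and then inverted with the closed form for n. This is a sketch under the assumption, consistent with the sin θ₁ = √3/2 used above, that the incidence angle is 60° on a 60° prism; the rearranged formula for n is my own algebra from equations (1)–(3), not taken from the thread.

```python
import math

def exit_angle(n, theta1_deg=60.0, apex_deg=60.0):
    """Trace a ray through the prism using the thread's equations:
    two applications of Snell's law plus theta2 + theta3 = apex."""
    t2 = math.asin(math.sin(math.radians(theta1_deg)) / n)  # first refraction
    t3 = math.radians(apex_deg) - t2                        # prism geometry
    return math.degrees(math.asin(n * math.sin(t3)))        # second refraction

def index_from_exit_angle(theta_deg):
    """Invert the combined equation for n; this is where the square appears:
    sin(theta) = (sqrt(3)/2) * sqrt(n^2 - 3/4) - sqrt(3)/4."""
    s = math.sin(math.radians(theta_deg))
    return math.sqrt(0.75 + (2.0 / math.sqrt(3.0) * (s + math.sqrt(3.0) / 4.0)) ** 2)

n_true = 1.5
theta = exit_angle(n_true)          # forward: exit angle for glass-like n
n_back = index_from_exit_angle(theta)  # inverse: recover n from theta
```

Round-tripping through both functions recovers the original index, which confirms the algebra in post 6.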
# How do we split an SPSS dataset into 2 datasets to perform internal validation? [closed]

I want to split my data set into two files, with 50% of random cases in each file. I would like to use the first set as a training set and the second one for testing my prediction model.

Following is an example: given 10 cases (1,2,3,4,5,6,7,8,9,10), I want to split them into 2 files with 5 random cases in each file, e.g. (1,3,6,7,9) and (2,4,5,8,10). How can I do this using the Data > Select Cases option? I am able to generate a single set of 50% random cases, but how do I get the remaining 50% of cases into their own file for testing? SPSS gives an output of only 1 file.

## closed as off-topic by Nick Cox, Sven Hohenstein, ttnphns, Sycorax, Peter Flom♦ Feb 18 '16 at 19:33

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question appears to be off-topic because EITHER it is not about statistics, machine learning, data analysis, data mining, or data visualization, OR it focuses on programming, debugging, or performing routine operations within a statistical computing platform. If the latter, you could try the support links we maintain." – Nick Cox, Sven Hohenstein, ttnphns, Sycorax, Peter Flom

• This Q is both off-topic and also unclear (for example, may the two subsamples intersect by their case composition or not?). – ttnphns Feb 18 '16 at 19:00

DATASET ACTIVATE DataSet1.
# One-Sample t-Test [With R Code]

The one-sample t-test is very similar to the one-sample z-test. A sample mean is being compared to a claimed population mean. The t-test is required when the population standard deviation is unknown. The t-test uses the sample's standard deviation (not the population's standard deviation) and the Student t-distribution as the sampling distribution to find a p-value.

The t-Distribution

While the z-test uses the normal distribution, which depends only on the mean and standard deviation of the population, the various t-tests [one-sample, independent, dependent] use the t-distribution, which has an extra parameter over the normal distribution: degrees of freedom (df). The theoretical basis for degrees of freedom deserves a lot of attention, but for now, for the one-sample t-test, df = n – 1.

The distributions above show how the degrees of freedom affect the shape of the t-distribution. The gray distribution is the normal distribution. A low df causes the tails of the distribution to be fatter, while a higher df makes the t-distribution more like the normal distribution. The practical outcome of this is that samples with smaller n will need a mean further from the population mean to reject the null hypothesis than samples with larger n. And compared to the z-test, the t-test will always need the sample mean to be further from the population mean.

Just like the one-sample z-test, we have to define our null hypothesis and alternate hypothesis. This time I'm going to show a two-tailed test. The null hypothesis will be that there is NO difference between the sample mean and the population mean. The alternate hypothesis will test to see if the sample mean is significantly different from the population mean. The null and alternate hypotheses are written out as:

• $H_0: \bar{x} = \mu$
• $H_A: \bar{x} \neq \mu$

The graphic above shows a t-distribution with df = 5 with the critical regions highlighted.
Since the shape of the distribution changes with the degrees of freedom, the critical value for the t-test will change as well. The t-stat for this test is calculated the same way as the z-stat for the z-test, except the σ term [population standard deviation] in the z-test is replaced with s [sample standard deviation]:

$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \hspace{1cm} t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$

Like the z-stat, the higher the t-stat is, the more certainty there is that the sample mean and the population mean are different. There are three things that make the t-stat larger:

• a bigger difference between sample mean and population mean
• a small sample standard deviation
• a larger sample size

Example in R

Since the one-sample t-test follows the same process as the z-test, I'll simply show a case where you reject the null hypothesis. This will also be a two-tailed test, so we will use the null and alternate hypotheses found earlier on this page. Once again using the height and weight data set from UCLA's site, I'll create a tall-biased sample of 50 people for us to test.

```
#reads data set
data <- read.csv('Height_data.csv')
height <- data$Height

#N - number in population
#n - number in sample
N <- length(height)
n <- 50

#population mean
pop_mean <- mean(height)

#tall-biased sample
cut <- 1:25000
weights <- cut^.6
sorted_height <- sort(height)
set.seed(123)
height_sample_biased <- sample(sorted_height, size=n, prob=weights)
```

This sample would represent something like athletes, CEOs, or maybe a meeting of tall people. After creating the sample, we use R's mean() and sd() functions to get the parameters for the t-stat formula from above.

```
sample_mean <- mean(height_sample_biased)
sample_sd <- sd(height_sample_biased)
```

Now using the population mean, the sample mean, the sample standard deviation, and the number of samples (n = 50) we can calculate the t-stat.
```
#t-stat
t <- (sample_mean - pop_mean) / (sample_sd/sqrt(n))
```

Now you could look up the critical value for the t-test with 49 degrees of freedom [50 - 1 = 49], but this is R, so we can find the area under the tail of the curve beyond the t-stat [the blue area from the critical region diagram] and see if it's under 0.025. This will be our p-value, which is the probability that the value was achieved by random chance.

```
#p-value for t-test
1 - pt(t, n-1)
```

The answer should be 0.006882297, which is well below 0.025, so the null hypothesis is rejected and the difference between the tall-biased sample and the general population is statistically significant. You can find the full R code, including code to create the t-distribution and normal distribution data sets, on my GitHub.

# One Mean Z-test [with R code]

I've included the full R code, and the data set can be found on UCLA's Stats Wiki.

Building on finding z-scores for individual measurements or values within a population, a z-test can determine if there is a statistically significant difference between a sample mean and a population mean with a known population standard deviation. [Those conditions are essential for using this test.]

The z-test uses z-scores and a normal distribution to determine the probability the sample mean was drawn randomly from a known population. If the test fails to reject the null hypothesis, the conclusion is that random sampling is likely to have produced the sample. If the test rejects the null hypothesis, then the sample is likely to be a result of non-random sampling [i.e. like team captains picking the tallest kids for a basketball game in gym class].

The z-test relies critically on the central limit theorem, which basically states that if you take an n >= 30 sample from a population [with any distribution] many times over, you'll get a normal distribution of the sample means. [This needs its own post to explain fully, and there are interesting ways you can program R to illustrate this.]
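As the bracketed aside suggests, the central limit theorem is easy to illustrate in R. A minimal sketch (my own, not part of the original post; a skewed exponential population stands in for the height data to make the point more dramatic):

```
#population that is clearly not normal
set.seed(123)
population <- rexp(25000, rate = 1)

#draw many samples of n = 30 and keep each sample mean
sample_means <- replicate(10000, mean(sample(population, size = 30)))

#the sample means themselves come out approximately normal
hist(sample_means, breaks = 50)
```

Even though the population is heavily right-skewed, the histogram of the 10,000 sample means is close to a bell curve centered at the population mean.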
The sample mean distribution chart is shown below, compared to the population distribution. The important concepts to notice here are:

• the area of both distributions is equal to 1
• the sample mean distribution is a normal distribution
• the sample mean distribution is tighter and taller than the population distribution

For the rest of this post, the sample mean distribution will be used for the z-test, and it is represented in green as opposed to blue. Also, the data I use in this post is height data from this data set. It represents the heights of 25,000 children from Hong Kong. The data doesn't reflect US adults, but it's a great normally distributed data set.

The goal of the z-test will be to test whether a sample and its mean are randomly sampled from the population or whether there's some significant difference. For example, you could use this test to see if the average height of NBA players is statistically significantly different from the general population. While the NBA example is pretty common sense, not every problem will be that clear. Sample size [like in many hypothesis tests] is a huge factor. Small sample sizes require huge differences between the sample mean and the population mean to be significant.

For a one-mean z-test, we will be using a one-tail hypothesis test. The null hypothesis will be that there is NO difference between the sample mean and the population mean. The alternate hypothesis will test to see if the sample mean is greater. The null and alternate hypotheses are written out as:

• $H_0: \bar{x} = \mu$
• $H_A: \bar{x} > \mu$

The graph above shows the critical regions for a right-tailed z-test. The critical regions reflect areas where the z-stat has to fall in order for the test to reject the null hypothesis. The critical regions are defined so that they represent a probability less than the stated significance level. For example, the critical region for a 95% confidence level only has an area [probability] of 5%.
If the sample mean truly equals the population mean, there is only a 5% chance the z-stat lands in the critical region by random chance. This concept is the basis for almost every hypothesis test.

The z-test uses the z-stat, which is calculated analogously to the z-score, the difference being that it uses standard error instead of standard deviation. These two concepts are similar: the standard deviation applies to the 'spread' of the blue population distribution, while the standard error applies to the 'spread' of the green sample mean distribution. The z-stat is calculated as:

$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$

The higher the z-stat is, the more certainty there is that the sample mean and the population mean are different. There are three things that make the z-stat larger:

• a bigger difference between sample mean and population mean
• a small population standard deviation
• a larger sample size

Example

I have two samples from the data set: one is entirely random and the other is weighted heavily towards taller people. The null hypothesis would be that there's no difference between the sample mean and the population mean. The alternate would be that the sample mean is greater than the population mean. The weighted sample would be the sample if you were evaluating the mean height of a basketball team vs the general population. Here are the two n=50 samples and the R code for how I constructed them using a set random seed of 123.

```
#unbiased random sample
set.seed(123)
n <- 50
height_sample <- sample(height, size=n)
sample_mean <- mean(height_sample)

#tall-biased sample
cut <- 1:25000
weights <- cut^.6
sorted_height <- sort(height)
set.seed(123)
height_sample_biased <- sample(sorted_height, size=n, prob=weights)
sample_mean_biased <- mean(height_sample_biased)
```

The population mean is 67.993, the first unbiased sample is 68.099, and the tall-biased group is 68.593.
Both samples are higher than the population mean, but are both significantly higher than the mean? To figure this out, we need to calculate the z-stats and find out if those z-stats fall in the critical region using the equation:

$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$

We can substitute and calculate with the population standard deviation [σ] = 1.902:

$z_{unbiased} = \frac{68.099 - 67.993}{1.902/\sqrt{50}} = 0.3922 \qquad z_{tall\text{-}biased} = \frac{68.593 - 67.993}{1.902/\sqrt{50}} = 2.229$

```
#random unbiased sample z-stat calculation
z <- (sample_mean - pop_mean)/(pop_sd/sqrt(n))

#tall-biased sample z-stat calculation
z <- (sample_mean_biased - pop_mean)/(pop_sd/sqrt(n))
```

Quickly, knowing that the critical value for a one-tail z-test at 95% confidence is 1.645, we can determine that the unbiased random sample is not significantly different, but the tall-biased sample is significantly different. This is because the z-stat for the unbiased sample is less than the critical value, while the tall-biased one is higher than the critical value.

Plotting the z-test for the unbiased sample, the area [probability] to the right of the z-stat is much higher than the accepted 5%. The larger the green area is, the more likely the difference between the sample mean and the population mean was obtained by random chance. To get a z-test to be significant, you want the z-stat high so that the area [probability] is low. [In practice, this can be done by increasing sample size.] The tall-biased sample mean's z-stat creates a plot with much less area to the right of the z-stat, so these results were much less likely to be obtained by chance.

The p-values can be obtained by calculating the area to the right of the z-stat. The R code below summarizes how to do that using R's 'pnorm' function.
```
#calculating the p-value
p_yellow2 <- pnorm(z)
p_green2 <- 1 - p_yellow2
p_green2
```

The p-value for the unbiased sample is .3474, meaning there's a 34.74% chance that the result was obtained due to random chance, while the tall-biased sample only has a p-value of .01291, or a 1.291% chance of being a result of random chance. Since the p-value of the tall-biased sample is less than .05, the null hypothesis is rejected, but since the unbiased sample's p-value is well above .05, the null hypothesis is retained.

What the one-mean z-test accomplished was telling us that a simple random sample from the population wasn't really that different from the population, while a sample that wasn't completely random but was much taller than the overall population was shown to be different. While this test isn't used often, the principles of distributions, calculating test stats, and p-values have many applications within the statistics universe.

# Calculating Z-Scores [with R code]

I've included the full R code, and the data set can be found on UCLA's Stats Wiki.

Normal distributions are convenient because they can be scaled to any mean or standard deviation, meaning you can use the exact same distribution for weight, height, blood pressure, white-noise errors, etc. Obviously, the means and standard deviations of these measurements should all be completely different. In order to get the distributions standardized, the measurements can be changed into z-scores.

Z-scores are a stand-in for the actual measurement, and they represent the distance of a value from the mean measured in standard deviations. So a z-score of 2.0 means the measurement is 2 standard deviations away from the mean.

To demonstrate how this is calculated and used, I found a height and weight data set on UCLA's site. They have height measurements from children from Hong Kong. Unfortunately, the site doesn't give much detail about the data, but it is an excellent example of a normal distribution, as you can see in the graph below.
The red line represents the theoretical normal distribution, while the blue area chart reflects a kernel density estimation of the data set obtained from UCLA. The data set doesn't deviate much from the theoretical distribution. The z-scores are also listed on this normal distribution to show how the actual measurements of height correspond to the z-scores, since the z-scores are simple arithmetic transformations of the actual measurements.

The first step to find the z-score is to find the population mean and standard deviation. It should be noted that the sd function in R uses the sample standard deviation and not the population standard deviation, though with 25,000 samples the difference is rather small.

```
#DATA LOAD
data <- read.csv('Height_data.csv')
height <- data$Height

hist(height) #histogram

#POPULATION PARAMETER CALCULATIONS
pop_sd <- sd(height)*sqrt((length(height)-1)/(length(height)))
pop_mean <- mean(height)
```

Using just the population mean [μ = 67.99] and standard deviation [σ = 1.90], you can calculate the z-score for any given value of x. In this example I'll use 72 for x.

$z = \frac{x - \mu}{\sigma}$

```
z <- (72 - pop_mean) / pop_sd
```

This gives you a z-score of 2.107. To put this tool to use, let's use the z-score to find the probability of finding someone who is 72 inches [6-foot] tall. [Remember this data set doesn't apply to adults in the US, so these results might conflict with everyday experience.] The z-score will be used to determine the area [probability] underneath the distribution curve past the z-score value that we are interested in.

[One note is that you have to specify a range (72 to infinity) and not a single value (72). If you wanted to find people who are exactly 6-foot, not taller than 6-foot, you would have to specify the range of 71.5 to 72.5 inches. This is another problem, but it has everything to do with the intervals of definite integrals, if you are familiar with Calc I.]

The above graph shows the area we intend to calculate.
The blue area is our target, since it represents the probability of finding someone taller than 6-foot. The yellow area represents the rest of the population, or everyone who is under 6-feet tall. The z-score and actual height measurements are both given, underscoring the relationship between the two.

Typically in an introductory stats class, you'd use the z-score, look it up in a table, and find the probability that way. R has a function 'pnorm' which will give you a more precise answer than a table in a book. ['pnorm' stands for "probability normal distribution".] Both R and typical z-score tables will return the area under the curve from -infinity to the value on the graph; this is represented by the yellow area. In this particular problem, we want to find the blue area. The solution to this is an easy arithmetic step: the total area under the curve is 1, so subtracting the yellow area from 1 will give you the area [probability] for the blue area.

Yellow Area:

```
p_yellow1 <- pnorm(72, pop_mean, pop_sd) #using x, mu, and sigma
p_yellow2 <- pnorm(z)                    #using z-score of 2.107
```

Blue Area [TARGET]:

```
p_blue1 <- 1 - p_yellow1 #using x, mu, and sigma
p_blue2 <- 1 - p_yellow2 #using z-score of 2.107
```

Both of these techniques in R will yield the same answer of 1.76%. I used both methods to show that R has some versatility that traditional statistics tables don't have. I personally find statistics tables antiquated, since we have better ways to determine probabilities, and the table doesn't provide any insight over software solutions.

Z-scores are useful when relating different measurement distributions to each other, acting as a 'common denominator'. The z-scores are used extensively for determining the area underneath the curve when using textbook tables, and they can also be easily used in programs such as R. Some statistical hypothesis tests are based on z-scores and the basic principle of finding the area beyond some value.
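For a whole vector at once, base R's scale() computes the same z-scores in one call. A quick sketch (my own addition; note that scale() would otherwise center and scale with the sample statistics, so the population parameters from the post are passed explicitly):

```
#z-scores for every height at once, using the population parameters
z_all <- scale(height, center = pop_mean, scale = pop_sd)

#the entry for a 72-inch child matches the manual calculation
(72 - pop_mean) / pop_sd
```

This is handy when you want to standardize an entire column before plotting or feeding it into another analysis.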
# Pirates — Pitch Count

At the Pirates, we like to try to guess what the pitch count will be for the Pirates' starting pitcher. In honor of opening day today, I present a cheat sheet! The graphs might be a little bit overkill, but it's cool all the different ways you can visualize this simple data.

The number of pitches is distributed roughly normally with a left skew. This skew occurs because there are instances when the pitcher has a bad day and gets pulled really early. To account for this, I excluded any outing that didn't have more than 50 pitches. We will consider these rare events, which we shouldn't try to use in our prediction. The idea of the game is to hit the exact pitch count, and this would preclude a rare event from being factored in. I also used the median number of pitches instead of the average number of pitches for the same reason. We want to consistently pick the numbers which are the most likely to get hit, not to try to predict every game.

The idea of using the median over the mean is important when there is a skew to the otherwise normal distribution of the data. This is important for something like income. There is a huge skew for incomes across the entire US population, since there are so few people that make outrageous amounts of money. The mean of incomes will be much higher than the median of incomes. The median will be much more representative of the central tendency of the data. Applying this to the pitch count, the short outings are rare and the mean doesn't represent the most probable outcome for the day.

Happy Opening Day!
# What separable $\rho$ only admit separable pure decompositions with more than $\mathrm{rank}(\rho)$ terms?

As shown e.g. in Watrous' book (Proposition 6.6, page 314), a separable state $$\rho$$ can always be written as a convex combination of at most $$\mathrm{rank}(\rho)^2$$ pure, separable states. More precisely, using the notation in the book, any separable state $$\xi\in\mathcal X\otimes\mathcal Y$$ can be decomposed as $$\xi = \sum_{a\in\Sigma} p(a) \, x_a x_a^*\otimes y_a y_a^*,\tag1$$ for some probability distribution $$p$$, sets of pure states $$\{x_a: a\in\Sigma\}\subset\mathcal X$$ and $$\{y_a: a\in\Sigma\}\subset\mathcal Y$$, and alphabet $$\Sigma$$ with $$\lvert\Sigma\rvert\le \mathrm{rank}(\xi)^2$$.

This is shown by observing that $$\xi$$ is an element of the real affine space of hermitian operators $$H\in\mathrm{Herm}(\mathcal X\otimes\mathcal Y)$$ such that $$\mathrm{im}(H)\subseteq\mathrm{im}(\xi)$$ and $$\mathrm{Tr}(H)=1$$. This space has dimension $$\mathrm{rank}(\xi)^2-1$$, and thus from Carathéodory we get the conclusion.

Consider the case of the totally mixed state in a space $$\mathcal X\otimes\mathcal Y$$ with $$\mathrm{dim}(\mathcal X)=d, \mathrm{dim}(\mathcal Y)=d'$$. For this state, $$\xi\equiv \frac{1}{dd'}I = \frac{I}{d}\otimes\frac{I}{d'}$$, we have $$\mathrm{rank}(\xi)=\lvert\Sigma\rvert=dd'$$ for the standard choice of decomposition. Generating random convex combinations of product states, I also always find $$\lvert\Sigma\rvert=\mathrm{rank}(\xi)$$ (albeit, clearly, the numerics don't check for the existence of an alternative decomposition with fewer than $${\rm rank}(\xi)$$ components). In the case $$\lvert\Sigma\rvert=1$$, it is trivial to see that we must also always have $$\lvert\Sigma\rvert=\mathrm{rank}(\rho)$$.

What are examples in which this is not the case?
More precisely, what are examples of states for which there is no alphabet $$\Sigma$$ with $$\lvert\Sigma\rvert\le\mathrm{rank}(\xi)$$, such that $$\xi=\sum_{a\in\Sigma}p(a)x_a x_a^*\otimes y_a y_a^*$$?

The following is the Mathematica snippet I used to generate random convex combinations of product states:

```
RandomUnitary[m_] := Orthogonalize[
  Map[#[[1]] + I #[[2]] &, #, {2}] & @
    RandomReal[NormalDistribution[0, 1], {m, m, 2}]
];
randomPureDM[dim_] :=
  First@RandomUnitary@dim // KroneckerProduct[#, Conjugate@#] &;

With[{numComponents = 4, bigDim = 10},
 With[{
   mats = Table[
     KroneckerProduct[randomPureDM@bigDim, randomPureDM@bigDim],
     numComponents],
   probs = RandomReal[{0, 1}, numComponents] // #/Total@# &
  },
  Total[probs*mats] // Eigenvalues // Chop
 ]
]
```

A related question on physics.SE is What is the minimum number of separable pure states needed to decompose arbitrary separable states?.

• In the title, "less" means "less or equal"? Jul 23 '20 at 16:52
• @NorbertSchuch indeed. Should be phrased better now – glS Jul 23 '20 at 16:58
• Definitely, much better without the negation. Interesting question, by the way. Jul 23 '20 at 17:01

Symmetric Werner states in any dimension $$n\geq 2$$ provide examples. Let's take $$n=2$$ as an example for simplicity. Define $$\rho\in\mathrm{D}(\mathbb{C}^2\otimes\mathbb{C}^2)$$ as $$\rho = \frac{1}{6}\, \begin{pmatrix} 2 & 0 & 0 & 0\\ 0 & 1 & 1 & 0\\ 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 2 \end{pmatrix},$$ which is proportional to the projection onto the symmetric subspace of $$\mathbb{C}^2\otimes\mathbb{C}^2$$. The projection onto the symmetric subspace is always separable, but here you can see it easily by applying the PPT test. The rank of $$\rho$$ is 3.

It is possible to write $$\rho$$ as $$\rho = \frac{1}{4}\sum_{k = 1}^4 u_k u_k^{\ast} \otimes u_k u_k^{\ast}$$ by taking $$u_1,\ldots,u_4$$ to be the four tetrahedral states, or any other four states that form a SIC (symmetric informationally complete measurement) in $$\mathbb{C}^2$$.
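The SIC decomposition claimed here is easy to check numerically. The sketch below (my own, assuming numpy) uses the standard tetrahedral Bloch vectors for the four SIC states and verifies that their symmetrized mixture reproduces the $4\times 4$ matrix above:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Bloch vectors of the four tetrahedral (SIC) states
bloch = np.array([[1, 1, 1], [1, -1, -1],
                  [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# Each state is the rank-one projector (I + r.sigma)/2
states = [(I2 + r[0] * X + r[1] * Y + r[2] * Z) / 2 for r in bloch]

# (1/4) * sum_k  rho_k (x) rho_k
mix = sum(np.kron(s, s) for s in states) / 4

# The symmetric Werner state from the answer
rho = np.array([[2, 0, 0, 0],
                [0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 2]], dtype=complex) / 6
```

The two matrices agree exactly, as expected from the fact that a SIC is a projective 2-design.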
It is, however, not possible to express $$\rho$$ as $$\rho = \sum_{k = 1}^3 p_k x_k x_k^{\ast} \otimes y_k y_k^{\ast}$$ for any choice of unit vectors $$x_1,x_2,x_3,y_1,y_2,y_3\in\mathbb{C}^2$$ and probabilities $$p_1, p_2, p_3$$. To see why, let us assume toward contradiction that such an expression does exist. Observe first that because the image of $$\rho$$ is the symmetric subspace, the vectors $$x_k$$ and $$y_k$$ must be scalar multiples of one another for each $$k$$, so there is no loss of generality in assuming $$y_k = x_k$$. Next we will use the fact that if $$\Pi$$ is any rank $$r$$ projection operator and $$z_1,\ldots,z_r$$ are vectors satisfying $$\Pi = z_1 z_1^{\ast} + \cdots + z_r z_r^{\ast},$$ then it must be that $$z_1,\ldots,z_r$$ are orthogonal unit vectors. Using the fact that $$3\rho$$ is a projection operator, we conclude that $$p_1 = p_2 = p_3 = 1/3$$ and $$x_1\otimes x_1$$, $$x_2\otimes x_2$$, $$x_3\otimes x_3$$ are orthogonal. This implies that $$x_1$$, $$x_2$$, $$x_3$$ are orthogonal. This, however, contradicts the fact that these vectors are drawn from a space of dimension 2, so we have a contradiction and we're done. More generally, the symmetric Werner state $$\rho\in\mathrm{D}(\mathbb{C}^n\otimes\mathbb{C}^n)$$ is always separable and has rank $$\binom{n+1}{2}$$ but cannot be written as a convex combination of fewer than $$n^2$$ rank one separable states (and that is only possible when there exists a SIC in dimension $$n$$). This fact is proved in a paper by Andrew Scott [arXiv:quant-ph/0604049]. • Ah, I should have known - the symmetric or the antisymmetric projector are always an example! Jul 24 '20 at 15:52 • One can also use Example 6.10 of the mentioned chapter of your book and notice that the partial transpose of the symmetric state is an isotropic state with full rank $n^2$. So, it can't have less than $n^2$ elements in the separable pure decomposition. Thus the same is true for the symmetric state. 
Jul 15 at 8:12 • Very nice observation! That's a much easier way to argue it. Jul 15 at 13:00
# Abstract

Aether, the Mother of All Forces in Nature - Nuclear Forces (Paper III of IV)

Cameron Y. Rebigsol

Year: 2018
Pages: 7

Keywords: Aether ocean, fluid, intrinsic pressure, strong force, weak force, binding energy, photon, negoff, posoff

Gravitational force and the nuclear strong force are established based on the same nature of the Aether fluid. However, because the scales of distance involved in the establishment of these two forces are so incredibly different, some details that can be extremely trivial for the gravitational force become gravely important in bringing about the nuclear strong force. Upon recovering those "extremely trivial" details in the exploration of the strong force, we find that the Aether's intrinsic pressure must be revised to a much higher magnitude, on the order of $10^{14}\ \mathrm{Newton/cm^2}$. This difference in turn accounts for the contrasting difference in magnitude between these two forces. Since both forces are actually of the same nature, we categorize them under one new term: condensive force. Although the nuclear strong force never exists without the presence of the Aether's intrinsic pressure, it is a short-range force, acting at distances on the order of a nucleus. In such a range, the Aether fluid pressure is escalated many folds above its normal intrinsic pressure. This creates a situation in which the strong force also competes with the Aether's intrinsic pressure. Their competition results in the so-called weak force. If the acceptance of Aether would lead to a reasonable explanation of the condensive force and the electromagnetic force, coupled with the finding that the theory of relativity is mathematically self-refuted $$_{[1]}$$, the science world may find it difficult to resign from the task of re-evaluating the validity of the Quantum Theory.
At least, as the article develops, the concept of the photon is found to be self-refuted, and so is Bohr's model of atomic structure.
# vectorToMonomial -- Converts an exponent vector into a monomial.

## Synopsis

• Usage: vectorToMonomial(v,R)
• Inputs:
  • v, a vector of exponents
  • R, a polynomial ring
• Outputs:
  • a monomial in R

## Description

Converts the exponent vector v into a monomial in the variables of R. The number of variables of R has to match the length of v.

i1 : R=QQ[x_0..x_4]

o1 = R

o1 : PolynomialRing

i2 : m=vector {1,2,1,0,0}

o2 = | 1 |
     | 2 |
     | 1 |
     | 0 |
     | 0 |

       5
o2 : ZZ

i3 : vectorToMonomial(m,R)

          2
o3 = x  x  x
      0  1  2

o3 : R
# place footnote under table above caption

I would like to place a footnote under a table and above the caption. Since I am using the floatrow package I went with \floatfoot, however there is no option to position \floatfoot below the table and above the caption.

MWE:

```
\documentclass[12pt]{article}
\usepackage{floatrow}
\usepackage{caption}
\floatsetup[table]{footposition=bottom}
\captionsetup[table]{labelfont=bf}
\begin{document}
\begin{table}
\begin{tabular}{lr}
Yellow & 3 \\
Green & $k-3$ \\
\end{tabular}
\caption{Apples}
\floatfoot{$k$ number of apples}
\end{table}
\end{document}
```

You could use \RawCaption:

```
\documentclass[12pt]{article}
\usepackage{floatrow}
\usepackage{caption}
\captionsetup[table]{labelfont=bf}
\begin{document}
\begin{table}
\begin{tabular}{lr}
Yellow & 3 \\
Green & $k-3$ \\
\end{tabular}
\RawCaption{\caption*{\footnotesize$k$ number of apples}}
\caption{Apples}
\end{table}
\end{document}
```

Or even a simple text line with the desired formatting:

```
\documentclass[12pt]{article}
\usepackage{floatrow}
\usepackage{caption}
\captionsetup[table]{labelfont=bf}
\begin{document}
\begin{table}
\begin{tabular}{lr}
Yellow & 3 \\
Green & $k-3$ \\
\end{tabular}\par\medskip
{\footnotesize$k$ number of apples}
\caption{Apples}
\end{table}
\end{document}
```

• Can you tell me how I control the vertical padding of the rawcaption, without affecting the padding of normal captions? – ted Jul 12 '13 at 21:42

The package threeparttable provides tablenotes and should probably do what you want.
```latex
\documentclass[12pt]{article}
\usepackage{floatrow}
\usepackage{caption}
\usepackage{threeparttable}

\floatsetup[table]{footposition=bottom}
\captionsetup[table]{labelfont=bf}

\begin{document}
\begin{table}
\begin{threeparttable}[b]
\caption{Apples}
\begin{tabular}{lr}
Yellow & 3 \\
Green  & $k-3$ \\
\end{tabular}
\begin{tablenotes}{\footnotesize
\item [] $k$ number of apples
}
\end{tablenotes}
\end{threeparttable}
\end{table}
\end{document}
```

Unfortunately the width of the tablenote is limited to the table width. Hence I recommend defining a fixed table width with tabular* like this:

```latex
\begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}lr}
...
\end{tabular*}
```
### Home > INT2 > Chapter 6 > Lesson 6.1.4 > Problem 6-44

6-44. Calculate the area and perimeter of $\Delta ABC$ at right. Give both exact and approximate (decimal) answers.

Use your knowledge of special right triangles to find the lengths of the sides.

Area $\approx 21.86$ square units

Perimeter $\approx 24.59$ units
Disjoint union explained

Disjoint union should not be confused with Disjunctive union.

The disjoint union of a family of sets $(A_i : i \in I)$ is a set $A$, often denoted by $\bigsqcup_{i \in I} A_i$, with an injection of each $A_i$ into $A$, such that the images of these injections form a partition of $A$ (that is, each element of $A$ belongs to exactly one of these images). The disjoint union of a family of pairwise disjoint sets is their union.

In terms of category theory, the disjoint union is the coproduct of the category of sets, and is thus defined up to a bijection.

A standard way for building the disjoint union is to define $A$ as the set of ordered pairs $(x, i)$ such that $x \in A_i$, and the injection $A_i \to A$ as $x \mapsto (x, i)$.

Example

Consider the sets $A_0 = \{5, 6, 7\}$ and $A_1 = \{5, 6\}$. It is possible to index the set elements according to set origin by forming the associated sets
$$\begin{align} A^*_0 & = \{(5,0), (6,0), (7,0)\} \\ A^*_1 & = \{(5,1), (6,1)\} \end{align}$$
where the second element in each pair matches the subscript of the origin set (for example, the $0$ in $(5,0)$ matches the subscript in $A_0$, etc.). The disjoint union $A_0 \sqcup A_1$ can then be calculated as follows:
$$A_0 \sqcup A_1 = A^*_0 \cup A^*_1 = \{(5,0), (6,0), (7,0), (5,1), (6,1)\}.$$

Set theory definition

Formally, let $\left\{A_i : i \in I\right\}$ be a family of sets indexed by $I$. The disjoint union of this family is the set
$$\bigsqcup_{i \in I} A_i = \bigcup_{i \in I} \left\{(x, i) : x \in A_i\right\}.$$
The elements of the disjoint union are ordered pairs $(x, i)$. Here $i$ serves as an auxiliary index that indicates which $A_i$ the element $x$ came from.

Each of the sets $A_i$ is canonically isomorphic to the set
$$A_i^* = \left\{(x, i) : x \in A_i\right\}.$$
Through this isomorphism, one may consider that $A_i$ is canonically embedded in the disjoint union. For $i \neq j$, the sets $A_i^*$ and $A_j^*$ are disjoint even if the sets $A_i$ and $A_j$ are not.

In the extreme case where each of the $A_i$ is equal to some fixed set $A$ for each $i \in I$, the disjoint union is the Cartesian product of $A$ and $I$:
$$\bigsqcup_{i \in I} A_i = A \times I.$$

Occasionally, the notation
$$\sum_{i \in I} A_i$$
is used for the disjoint union of a family of sets, or the notation $A + B$ for the disjoint union of two sets.
This notation is meant to be suggestive of the fact that the cardinality of the disjoint union is the sum of the cardinalities of the terms in the family. Compare this to the notation for the Cartesian product of a family of sets. Disjoint unions are also sometimes written $\biguplus_{i \in I} A_i$ or $\dot{\bigcup}_{i \in I} A_i$.

In the language of category theory, the disjoint union is the coproduct in the category of sets. It therefore satisfies the associated universal property. This also means that the disjoint union is the categorical dual of the Cartesian product construction. See coproduct for more details.

For many purposes, the particular choice of auxiliary index is unimportant, and in a simplifying abuse of notation, the indexed family can be treated simply as a collection of sets. In this case $A_i^*$ is referred to as a copy of $A_i$ and the notation ${\bigcup^{*}}_{A \in C}\, A$ is sometimes used.

Category theory point of view

In category theory the disjoint union is defined as a coproduct in the category of sets. As such, the disjoint union is defined up to an isomorphism, and the above definition is just one realization of the coproduct, among others. When the sets are pairwise disjoint, the usual union is another realization of the coproduct. This justifies the second definition in the lead.

This categorical aspect of the disjoint union explains why $\coprod$ is frequently used to denote it.
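The tagged-pair construction above translates directly into code. A minimal Python sketch (the function names are illustrative):

```python
def disjoint_union(family):
    """Disjoint union of an indexed family of sets.

    Each element x of A_i becomes the tagged pair (x, i), so the images
    of the injections partition the result even when the A_i overlap.
    """
    return {(x, i) for i, a in family.items() for x in a}

def injection(x, i):
    """Canonical injection A_i -> disjoint union, x |-> (x, i)."""
    return (x, i)

# The example from the text: A_0 = {5, 6, 7}, A_1 = {5, 6}
A = disjoint_union({0: {5, 6, 7}, 1: {5, 6}})
print(sorted(A))  # [(5, 0), (5, 1), (6, 0), (6, 1), (7, 0)]
print(len(A))     # 5 = |A_0| + |A_1|, even though A_0 and A_1 overlap
```

Note how the cardinality of the result is the sum of the cardinalities of the terms, matching the $\sum$ notation discussed above.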
# hasError -- Check if an object is a try-error or is a vector containing at least one try-error member.

## Usage

```r
hasError(object, FLAGrecursive = TRUE)
```

## Arguments

* `object`: an R object.
* `FLAGrecursive`: logical (default `TRUE`); relevant only if `object` is a vector. If `TRUE` and `object` is a vector, the first-level members of `object` are checked.

## Value

Logical.
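The documented behavior can be mimicked in Python, treating caught exceptions the way R's `try()` stores try-error objects. The helper below is an illustrative analog, not part of any package:

```python
def has_error(obj, recursive=True):
    """Return True if obj is an error, or (when recursive is True and
    obj is a list/tuple) if at least one first-level member is an error."""
    if isinstance(obj, Exception):
        return True
    if recursive and isinstance(obj, (list, tuple)):
        return any(isinstance(member, Exception) for member in obj)
    return False

results = [42, ValueError("failed"), "ok"]
print(has_error(results))                   # True
print(has_error(results, recursive=False))  # False
```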
Search: ECD / Glossary

A

Activity Selection Process

The Activity Selection Process is the part of the Assessment Cycle that selects a task or other activity for presentation to an examinee.

Acyclic directed graph

A directed graph that has no directed cycles. Note that if the directions of the arrows are dropped there may be cycles.

Administrator

The Administrator is the person responsible for setting up and maintaining the assessment. The Administrator is responsible for starting the process and configuring various choices; for example, whether or not item-level feedback will be displayed during the assessment.

Assembly Model

The Assembly Model, one of a collection of six different types of models that comprise the Conceptual Assessment Framework (CAF), provides the information required to control the selection of tasks for the creation of an assessment.

Assessment

An Assessment is a system (computer, manual, or some combination of these) that presents examinees, or participants, with work and evaluates the results. This includes high-stakes examinations, diagnostic tests, and coached-practice systems, which include embedded assessment.

Assessment Cycle

The Assessment Cycle is comprised of four basic processes: Activity Selection, Presentation, Response Processing, and Summary Scoring. The Activity Selection Process selects a task or other activity for presentation to an examinee. The Presentation Process displays the task to the examinee and captures the results (or Work Products) when the examinee performs the task. Response Processing identifies the essential features of the response and records these as a series of Observations. The Summary Scoring Process updates the scoring based on the input it receives from Response Processing. This four-process architecture can work in either synchronous or asynchronous mode.

Assessment Designer

A person who is responsible for building and/or maintaining the Conceptual Assessment Framework for an assessment.
B

Bayesian network

A Bayesian network (or Bayes net) is a method of representing a probability distribution with an acyclic directed graph. The nodes of the graph represent variables in the problem and the pattern of edges represents conditional independence relationships (see d-separation). The variables in a Bayesian network are generally required to be discrete. Bayesian networks are a special case of graphical models.

Beta distribution

The beta distribution is a continuous probability distribution with the following probability density function: $f(\theta|a,b) = \left[{1\over B(a,b)} \right] \theta^{a-1}(1-\theta)^{b-1}$, where $B(a,b)$ is the "beta function", $B(a,b) = \int_0^1 t^{a-1}(1-t)^{b-1}\,dt = {\Gamma(a)\Gamma(b)\over \Gamma(a+b)}$. The beta distribution is interesting because it is a natural conjugate of the binomial distribution. In particular, if $\theta$ is interpreted as the probability of 'success', then the parameter $a$ corresponds to the number of observed successes and the parameter $b$ corresponds to the number of observed failures. The Dirichlet distribution is a generalization of the beta distribution. $B(1,1)$ is the uniform distribution.

Binomial (Bernoulli) distribution

A single event or trial which can 'succeed' with probability $\theta$ is said to follow a Bernoulli distribution. If the experiment is repeated for $n$ independent trials, then the count of the number of 'successes' is said to follow a binomial distribution. The probability mass function is: $p(Y=y|\theta,n) = {n\choose y}\theta^y(1-\theta)^{n-y}$ for $y=0,\ldots,n$. The multinomial distribution is a generalization of the binomial where each trial has more than two possible outcomes.

C

Calibration

Choosing a set of parameters for a measurement model based on data collected from that model. In Bayesian terms this means calculating a posterior law for parameters based on the prior law and the collected data.
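The beta-binomial conjugacy described above makes Bayesian updating a matter of simple arithmetic: add observed successes to $a$ and observed failures to $b$. A minimal sketch (the function names are illustrative):

```python
def beta_binomial_update(a, b, successes, failures):
    """Conjugate update: a Beta(a, b) prior combined with binomial data
    yields a Beta(a + successes, b + failures) posterior."""
    return a + successes, b + failures

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Start from the uniform prior B(1, 1) and observe 7 successes in 10 trials.
a, b = beta_binomial_update(1, 1, successes=7, failures=3)
print((a, b))           # (8, 4)
print(beta_mean(a, b))  # posterior mean of theta, 8/12 ~ 0.667
```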
Child

In a Graph, the child is the node in the second position of the edge; that is, the arrow points from the parent node to the child node. In a Bayes net, a child node has a conditional probability distribution that depends on one or more other nodes, which are its parents. The descendants of a node $A$ are any nodes that are a child, a child of a child, a child of a child of a child, and so forth.

Claim

A claim is a proposition, educationally relevant and stated in natural language, about the kinds of things a student might know or be able to do, in what kinds of circumstances. Claims are what users of assessments want to be able to say about examinees, and are the basis of Score Reports. A Reporting Rule maps information from probability distributions over Student Model Variables to summary statements about the amount and direction of evidence to support a claim.

Clique

A maximally connected set of nodes in a graph. This means that all nodes in the clique are neighbors and there is no other node in the graph which is a neighbor of all of the nodes in the clique. The cliques of a graphical model are important because they define the spaces over which the computations take place. The cliques of the graph are often arranged into a tree of cliques or junction tree. The treewidth of a graph is the size of its largest clique and is generally the dominant term in the cost of probability calculations within that graph.

Compensatory Distribution

A design pattern for a conditional probability table with multiple parents (usually representing proficiencies) where more of one skill can compensate for less of another. For example, when driving a car, planning ahead can compensate for reaction time.

Competency Model

An alternative term used for proficiency model because the term "proficiency" has taken on a special meaning under the No Child Left Behind Act.
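One common way to realize the compensatory pattern described above is to combine the parent skill levels additively before pushing the sum through a link function, so that a surplus of one skill offsets a deficit in the other. A hedged Python sketch (the additive combination and logistic link are illustrative modeling choices, not prescribed by the glossary):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def compensatory_prob(skill1, skill2, difficulty=0.0):
    """Probability of success when more of one skill can
    compensate for less of the other (additive combination)."""
    return logistic(skill1 + skill2 - difficulty)

# Low planning but fast reactions gives the same result as the
# balanced case, because only the sum of the skills matters:
print(compensatory_prob(-1.0, 1.0))  # 0.5
print(compensatory_prob(0.0, 0.0))   # 0.5
print(compensatory_prob(2.0, -2.0))  # 0.5
```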
An assessment in which the form of the assessment, that is, the sequence of tasks seen by the examinee, is dynamically created by a computer at the time of the assessment; usually each task is selected based on observed outcomes from previous tasks. This is often contrasted with a linear test, in which each form has a fixed sequence of tasks.

Conditional Independence

Two events A and B are conditionally independent given a third event X if, when the state of X is known, knowledge of A provides no information about B. This can be stated in terms of probabilities as $\Pr(A|X,B) = \Pr(A|X,\neg B) = \Pr(A|X)$. When two events are conditionally independent their joint conditional probabilities can be calculated as the product of the conditional probabilities: $\Pr(A,B|X) = \Pr(A|X) \Pr(B|X)$.

Conditional Multinomial Distribution

This is the natural distribution for a conditional probability table in a Bayesian network where every row of the conditional probability table, corresponding to a configuration of the parent variables, is an independent multinomial distribution. The natural conjugate distribution for the parameters of this distribution is the hyper-Dirichlet law.

Conceptual Assessment Framework (CAF)

The Conceptual Assessment Framework builds specific models for use in a particular assessment product (taking into account the specific purposes and requirements of that product). The conceptual assessment framework consists of a collection of six different types of models that define what objects are needed and how an assessment will function for a particular purpose. The models of the CAF are as follows: the Student Model, the Task Model, the Evidence Model, the Assembly Model, the Presentation Model, and the Delivery Model.

Delivery Model
The Delivery Model, one of a collection of six different types of models that comprise the Conceptual Assessment Framework (CAF), describes which other models will be used, as well as other properties of the assessment that span all four processes, such as platform and security requirements.

Conjunctive Distribution

A design pattern for a conditional probability table with multiple parents (usually representing proficiencies) where all skills are necessary in order to solve the problem. For example, both reading and writing skills are necessary for a book report task. This is sometimes called a noisy-and distribution.

D

D-Separation

A set of rules for when observing a set of variables $S$ makes two other variables (or sets of variables) $A$ and $B$ independent. This provides the rules for reading conditional independence relationships from a directed graph. For $A$ and $B$ to be independent, there must be at least one variable in $S$ along every path from $A$ to $B$ or $B$ to $A$, as well as at least one variable of $S$ along the path from any common ancestor to $A$ and $B$. Additionally, no common descendant of $A$ and $B$ may appear in $S$.

DAG

An abbreviation for Directed Acyclic Graph, a term often used in place of acyclic directed graph. Technically incorrect (a directed acyclic graph would be a tree which is directed), but it has a better abbreviation.

Demographic Variable

A variable, often appearing in a proficiency model, that is known and provides background information about an examinee. Examples include race, gender, number of previous algebra courses taken, and preference for large-print test forms.

DiBello--Samejima Distribution

A conditional probability table in a Bayesian network that is constructed using a procedure first described by Lou DiBello that employs Samejima's graded response model to generate the conditional probabilities.
Under this procedure, first each state of each parent variable is assigned a real-number value called its effective theta. These are combined using a structure function that reflects the domain expert's view of how the skills interact in a particular task (popular choices are conjunctive, disjunctive, compensatory, and inhibitor) to produce an effective theta value for each parent combination. The resulting value is used as the ability parameter in Samejima's graded response model to produce the conditional probability distribution for that row.

Differential Item Functioning (DIF)

One way in which an assessment needs to be fair is that it should have the same measurement properties for different subgroups (often defined by demographic variables). In the language of Bayes nets, the outcome variables from any task should be conditionally independent from group membership given the proficiency variables. Any task in which the observables are not conditionally independent from group membership is said to exhibit DIF for that group. Traditionally, testing for DIF has been very important when the group membership variable indicates race or gender; however, DIF based on language group is also well studied, especially in the context of multilingual tests.

Difficulty

A parameter for a conditional probability distribution that controls what level of skill is necessary to achieve a high probability of getting an observed outcome in a higher state. In item response theory (IRT) models, the difficulty parameter is often subtracted from the proficiency variable and expressed on the same scale as the proficiency variables. In conditional probability distributions, the difficulty is often expressed as an intercept parameter with a negative sign. In general, higher difficulty means that fewer members of the population will get the item right.
Direct Evidence

If a proficiency variable is a parent of an observed outcome variable, then that variable provides "direct evidence" for the proficiency. If the proficiency variable is connected, but not a parent, then the observed outcome may still provide indirect evidence for the proficiency.

Directed Graph (Digraph)

A Graph whose edges are considered to be ordered; therefore, in a directed graph $(A,B)$ and $(B,A)$ are considered to be distinct edges, whereas in an undirected or simple graph they would be considered the same. The edges of a directed graph are often depicted with an arrow going from the parent node to the child node. The terms parent and child can be extended using the obvious analogy: a node $A$ is an ancestor of $B$ if there is a directed path from $A$ to $B$, and $B$ is then a descendant of $A$. Directed graphs are important in the construction of Bayesian networks, as the variables of the model are represented by nodes in the graph and the probability distribution for any variable is given conditional on its parents in the graph.

Directed Hypergraph

A Graph that, instead of having edges consisting of pairs of nodes, has directed hyperedges that consist of a partitioned set of nodes, divided into parents and children. Directed hypergraphs are often drawn with nodes as circles and hyperedges as squares, with tentacles connecting the nodes and the hyperedges (drawn as an arrow toward the hyperedge if the node is in the parent partition, and as an arrow toward the node if the node is in the child partition). Directed hypergraphs are useful for representing Bayesian networks when emphasizing the factorization structure, as the hyperedges correspond to the component factors of the joint probability distribution (each one represents a conditional probability distribution from its parents to its children). In this representation, the glyph used to indicate the hyperedge can be annotated with the type of distribution (e.g., conjunctive, compensatory).
Dirichlet Law

A generalization of the beta distribution that is the natural conjugate law for the parameter of a categorical or multinomial distribution. The random vector consists of $K$ values between zero and one, $\{\theta_1,\ldots,\theta_K\}$, with the restriction that $\sum_{k=1}^K \theta_k = 1$. The Dirichlet distribution then has the following density function: $f(\theta_1,\ldots,\theta_K | \alpha_1,\ldots,\alpha_K) = C \prod_{k=1}^K \theta_k^{\alpha_k-1}$, where $C = \Gamma(\sum_{k=1}^K \alpha_k) / \prod_{k=1}^K \Gamma(\alpha_k)$ is a normalizing constant. The parameters $\alpha_1,\ldots,\alpha_K$ are a collection of positive numbers which can be interpreted as a series of pseudo counts of observed cases.

Discrimination

A parameter which determines the difference in probability for an observable between people for whom a claim holds and those for whom it does not. In IRT models (and pseudo-IRT models like the DiBello--Samejima models) the discrimination parameter is a slope; i.e., in $P(X) = \textrm{logit}^{-1} \left( a(\theta-b) \right)$, the parameter $a$ is the discrimination parameter.

Disjunctive Distribution

A design pattern for a conditional probability table with multiple parents (usually representing proficiencies) where any one of the skills is sufficient to solve the problem. For example, a mathematics problem with two different solution strategies would be modeled with a disjunctive distribution. This is sometimes called a noisy-or distribution.

E

Edge

A pair of nodes in a Graph that are considered to be joined in some way. Note that in a directed graph the edge is ordered so that $(A,B)$ and $(B,A)$ are distinct edges, but in an undirected or simple graph they are considered to be the same. In a hypergraph, hyperedges are generalized edges that may contain one, two, or more nodes.

Elicitation

The process of asking questions (usually of an expert) to obtain the parameters or hyperparameters of a model.
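The roles of the discrimination (slope $a$) and difficulty ($b$) parameters in the logistic form quoted under Discrimination can be checked numerically. A small sketch, not a full IRT implementation:

```python
import math

def irt_2pl(theta, a, b):
    """P(X) = logit^{-1}(a * (theta - b)): probability of a correct
    response for proficiency theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta = b the probability is 0.5, whatever the discrimination is.
print(irt_2pl(0.0, a=1.0, b=0.0))            # 0.5
# Higher discrimination separates examinees above b more sharply:
print(round(irt_2pl(1.0, a=0.5, b=0.0), 3))  # 0.622
print(round(irt_2pl(1.0, a=2.0, b=0.0), 3))  # 0.881
```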
In constructing a Bayesian network, the elicitation usually takes place in several steps. First the analysts elicit the graphical structure of the model. Next, they elicit a design pattern or distribution type for each conditional probability table and prior laws for the parameters of those distributions. Finally, the experts must specify hyperparameters for all of the prior laws.

Equating

Equating is the process of creating a function that maps a pattern of observed outcomes to a score such that the scores obtained from two different forms of an assessment are equivalent. The idea, coming from the world of high-stakes testing, is that there should be no inherent advantage in receiving Form A or Form B. For equating to be meaningful, it must be reasonable that the two forms are equivalent, i.e., they are built to the same specifications. When comparable scores are needed from non-equivalent forms, then the weaker term, linking, is used instead.

Expected A Posteriori (EAP) Score

A score that is obtained by taking the expected value (mean) of the posterior distribution over score profiles. Consider a function $h({\bf s})$ that maps a proficiency profile ${\bf s}$ to a numeric value. For example, it might map to the integers 0, 1, or 2 based on whether a given proficiency variable is low, medium, or high. The EAP score is the expected value of $h({\bf s})$ with respect to the posterior distribution (after observing evidence).

Evaluation Rules

Evaluation Rules are a type of Evidence Rules that set the values of Observable Variables.

Evidence

In educational assessment, Evidence is information or observations that allow inferences to be made about aspects of an examinee's proficiency (which are unobservable) from evaluations of observable behaviors in given performance situations.
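The EAP computation described above is just a probability-weighted average of $h({\bf s})$. A minimal Python sketch using the low/medium/high example (the posterior probabilities are made up for illustration):

```python
def eap_score(posterior, h):
    """Expected a posteriori score: the mean of h(s) with respect to
    the posterior distribution over proficiency profiles."""
    return sum(p * h(s) for s, p in posterior.items())

# Posterior over a single proficiency variable after observing evidence:
posterior = {"low": 0.25, "medium": 0.5, "high": 0.25}
# h maps the states to the integers 0, 1, 2 as in the glossary entry:
h = {"low": 0, "medium": 1, "high": 2}.get

print(eap_score(posterior, h))  # 1.0
```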
Evidence Accumulation Process

In the Four-Process Architecture, this is the process that is responsible for compiling evidence across multiple tasks to draw inferences about student proficiency. Usually the evidence accumulation process maintains some kind of scoring record which records the current best information about the student proficiency. The evidence accumulation process performs two critical roles: (1) it accepts the observable outcomes from the Evidence Identification Process and uses them to update the scoring record, and (2) it calculates information about how much evidence any potential task might yield for the Activity Selection Process (if the assessment is adaptive).

Evidence-Centered Assessment Design (ECD)

Evidence-Centered Assessment Design (ECD) is a methodology for designing assessments that underscores the central role of evidentiary reasoning in assessment design. ECD is based on three premises: (1) an assessment must be built around the important knowledge in the domain of interest, and an understanding of how that knowledge is acquired and put to use; (2) the chain of reasoning from what participants say and do in assessments to inferences about what they know, can do, or should do next must be based on the principles of evidentiary reasoning; (3) purpose must be the driving force behind design decisions, which reflect constraints, resources, and conditions of use.

Evidence Identification Process

This is the part of the Four-Process Architecture that is responsible for processing the raw work product from a task and setting the values of the observable outcome variables. This could be as simple as matching the observed response to the key in a multiple-choice item or as complex as identifying key features in the output from a simulator. It could also be a human process, such as a rater assigning holistic or trait scores to an essay.

Evidence Model

The Evidence Model is a set of instructions for interpreting the output of a specific task.
It is the bridge between the Task Model, which describes the task, and the Student Model, which describes the framework for expressing what is known about the examinee's state of knowledge. The Evidence Model generally has two parts: (1) a series of Evidence Rules which describe how to identify and characterize essential features of the Work Product; (2) a Statistical Model that tells how the scoring should be updated given the observed features of the response.

Evidence Rules

Evidence Rules are the rubrics, algorithms, assignment functions, or other methods for evaluating the response (Work Product). They specify how values are assigned to Observable Variables, and thereby identify those pieces of evidence that can be gleaned from a given response (Work Product).

Evidence Rule Data

Evidence Rule Data is data found within Response Processing. It often takes the form of logical rules.

Examinee

See Participant.

Examinee Record

The Examinee Record is a record of tasks to which the participant is exposed, as well as the participant's Work Products, Observables, and Scoring Record.

Expected Weight of Evidence

This is a measure of how much information a task provides for a particular hypothesis or claim. Consider a hypothesis $H$, which is any true-or-false claim about a student. Let $\{e_{j}, j=1,\ldots,n\}$ be the possible observed outcomes from a task $E$ and let $W(H:e_{j})$ be the weight of evidence $e_{j}$ would provide for $H$. Then the expected weight of evidence that $E$ would provide for $H$ is $EW(H:E) = \sum_{j=1}^{n} W(H:e_{j})\Pr(e_{j} \mid H)$.

F

Four Processes

Any assessment must have four different logical processes.
The four processes that comprise the Assessment Cycle are the following: (1) the Activity Selection Process: the system responsible for selecting a task from the task library; (2) the Presentation Process: the process responsible for presenting the task to the examinee; (3) Response Processing: the first step in the scoring process, which identifies the essential features of the response that provide evidence about the examinee's current knowledge, skills, and abilities; (4) the Summary Score Process: the second stage in the scoring process, which updates beliefs about the examinee's knowledge, skills, and abilities based on the evidence provided by the preceding process.

Instructions

Instructions are commands sent by the Activity Selection Process to the Presentation Process.

G

Graph

A mathematical graph (or network) is two coordinated sets $\langle V,E \rangle$, where $V$ is a set of vertices or nodes and $E$ is a set of edges which consists of pairs of nodes. In a simple graph the edges are considered unordered, and in a directed graph the edges are ordered pairs. In a hypergraph, the notion of edge is extended to allow any positive number of nodes. In a graphical model or a Bayesian network, the nodes in the graph correspond to variables in the problem space and the edges describe the factorization structure and conditional independence properties of the joint probability distribution.

Graphical Model

A representation of the joint probability distribution using a graph where (1) the variables in the model are represented by nodes in the graph, and (2) separation in the graph (d-separation if the graph is directed) implies that the variables are conditionally independent. The term is sometimes used generically to refer to representations using both undirected and directed (i.e., Bayesian network) graphs, and sometimes specifically to refer only to representations on undirected graphs.
In that case, the joint probability distribution can be represented as the product of a collection of potentials over the cliques of the graph.

H

Hyper-Dirichlet Law

This is the natural conjugate distribution for the parameters of a conditional multinomial distribution. It is essentially an independent Dirichlet distribution for each row of the conditional probability table. The usage in this book is slightly different from the definition in Spiegelhalter and Lauritzen (1990), where it refers to a Bayesian network with hyper-Dirichlet distributions for every conditional probability table.

Hyperedge

A generalized edge used in a hypergraph. Unlike ordinary Graphs, where an edge must have exactly two nodes, a hyperedge can contain one, two, or more nodes. Undirected hyperedges are often drawn as a closed curve containing all of the nodes. In a directed hyperedge, the nodes in the edge are partitioned into a set of parents and children. Directed hyperedges are often drawn as a symbol with tentacles (arrows) connecting the symbol to the nodes (the direction of the arrow indicates whether the node is a parent or a child). When a hypergraph is used to represent the factorization structure of a joint probability distribution, the hyperedge represents one factor. The symbol used to represent a directed hyperedge can be based on the way the corresponding conditional probability distribution is parameterized.

Hypergraph

A generalization of a Graph in which the notion of edge is extended to allow one, two, or more nodes. The resulting edges are called hyperedges. Like ordinary graphs, hypergraphs come in directed and undirected versions. Both are useful for representing the factorization structure of a joint probability distribution over many variables. In the hypergraph the edges of the graph correspond to the factors in the model.

Hyperparameter

The parameters of a law providing the distribution for other parameters.
In Bayesian inference, the parameters of interest are assigned a prior law, and the hyperparameters of that law must be elicited from an expert. If the hyperparameters are themselves assigned a prior law, the parameters of that distribution are also called hyperparameters (and so forth up the hierarchy).

I

Item Response Theory (IRT)

Independence

Two events are said to be independent if knowledge about Event A provides no information about Event B. In terms of probability, this implies: $\Pr(A|B) = \Pr(A|\neg B) = \Pr(A)$. In this situation, the joint probability of Event A and Event B is the product of the individual probabilities: $\Pr(A,B) = \Pr(A)\Pr(B)$.

Indirect Evidence

Evidence that is not about a targeted proficiency variable, but rather evidence about a proficiency variable that is highly correlated with the targeted proficiency in the target population. For example, the ability to produce written text fluently is highly correlated with critical thinking skills in writing. Therefore, text length in a timed essay provides indirect evidence about critical thinking skills.

Influence Diagram

An extension of a Bayesian network that allows the following additional features: (a) decision variables whose values can be set by the decision maker, and (b) utilities which provide the values of potential outcomes and the costs of potential decisions. The solution to an influence diagram is a decision rule or policy for setting the decision variables (given the observed values of other variables which are available at the time of the decision) to maximize expected utility.

Inhibitor Distribution

A design pattern for conditional probability tables with two parent variables in which one parent is thought of as a valve or gatekeeper for the other. Unless the inhibiting variable has reached a certain level, the conditional probabilities are the same as if the participant were at the lowest level of the other skill.
After reaching the threshold value, more of the inhibiting skill does not contribute to the outcome. For example, when solving mathematical word problems, proficiency in the language of the assessment is an inhibitor skill: a certain minimal level is necessary to understand the problem, but beyond that more does not help.

J

Junction Tree

A re-expression of a Bayesian network into a tree shape such that each node in the tree represents a group of variables that either form a clique in the original network (possibly after additional edges have been filled in to make the graph triangulated) or an intersection of one or more cliques. The junction tree is a Markov Tree, and many probability calculations in a Bayesian network can be expressed as message-passing algorithms in a junction tree. The computational complexity of those calculations is usually driven by the size of the largest node in the junction tree, which is known as its treewidth. The process of building a junction tree from a Bayesian network is often called compiling the network.

K

L

Law

Another word for distribution. Steffen Lauritzen suggests that, to avoid ambiguity when describing Bayesian networks, the term distribution be used for the conditional probability tables in the network, and the term law be used for the prior/posterior distributions for the parameters of those distributions, as well as for any higher-level distribution for the hyperparameters.

Learning

This word can be used in two different ways. Student learning refers to how the knowledge, skills, and abilities of an individual change over time, in particular in response to instruction. Parameter learning refers to the act of Bayesian inference, that is, using data about a system to replace prior laws for model parameters with posterior laws.

Likelihood

The conditional probability of observable evidence given a hypothesized state and set of parameters.
This is often written $\Pr(X|\theta)$ where $X$ represents the observable evidence, and $\theta$ is a parameter or unknown variable. A popular alternative to Bayesian inference involves choosing a parameter value to maximize the likelihood function; however, this involves the implicit assumption that all parameter values are equally likely a priori. Link A task-specific version of an evidence model. Consider a collection of tasks all made from the same task model, and an evidence model that is used to evaluate evidence from work products from this task model. The individual tasks may vary slightly in their difficulty or other properties. The links are task-specific versions of the evidence model that share the same graphical structure and distribution families, but differ in the parameter values. Local Independence This is a desirable property of educational assessments that observable variables from distinct tasks should be conditionally independent given the proficiency variables. Note that the Bayesian network formulation uses a more relaxed version of this assumption than other measurement model frameworks, as activities that produce locally dependent observables can be placed within the same task and hence the dependence among the observables can be modeled within the evidence model. M Maximum A Posteriori (MAP) Score Maximum Likelihood Marginal Distribution Markov Chain Monte Carlo (MCMC) Markov Tree A transformed version of a Bayesian network that has nodes corresponding to sets of variables in the original model, and whose nodes are connected to have the running intersection property: the set of nodes containing any given variable forms a connected subtree. Many probability calculations within the Bayesian network can be expressed as passing messages in this tree. The two most common examples of a Markov tree are the junction tree and the tree of cliques.
Measurement Model The Measurement Model is that part of the Evidence Model that explains how the scoring should be updated given the observed features of the response. Model A Model is a design object in the CAF that provides requirements for one or more of the Four Processes, particularly for the data structures used by those processes (e.g., Tasks and Scoring Records). A Model describes variables, which appear in data structures used by the Four Processes, whose values are set in the course of authoring the tasks or running the assessment. Multinomial Distribution N Neighborhood Network Another name for a Graph. Normalization Constant O Observables/Observable Outcome Variables Observables are variables that are produced through the application of Evidence Rules to the task Work Product. Observables describe characteristics to be evaluated in the Work Product and/or may represent aggregations of other observables. Observation An Observation is a specific value for an observable variable for a particular participant. Outcome Pattern/Vector Outcome Space P Parent Parameter Participant A Participant is the person whose skills are being assessed. A Participant directly engages with the assessment for any of a variety of purposes (e.g., certification, tutoring, selection, drill and practice, etc.). Platform Platform refers to the method that will be used to deliver the presentation materials to the examinees. Platform is defined broadly to include human, computer, paper and pencil, etc. Posterior Distribution Posterior Predictive Distribution Potential Presentation Material Presentation Material is material that is presented to a participant as part of a task (including stimulus, rubric, prompt, possible options for multiple choice). Presentation Process The Presentation Process is the part of the Assessment Cycle that displays the task to the examinee and captures the results (or Work Products) when the examinee performs the task.
Pretest Data Prior Distribution Proficiency (Variable) Proficiency Model Profile Score Q Q-Matrix R Rasch Model Reliability Reporting Rule Reporting Rules describe how Student Model Variables should be combined or sampled to produce scores, and how those scores should be interpreted. Response See Work Product. Response Processing Response Processing is the part of the Assessment Cycle that identifies the essential features of the examinee's response and records these as a series of Observations. At one time referred to as the "Evidence Identification Process," it emphasizes the key observations in the Work Product that provide evidence. Response Processing Data See Evidence Rule Data. S Scoring Model Sensitivity Analysis Simple Graph Simulee Strategy Strategy refers to the overall method that will be used to select tasks in the Assembly Model. Student Model The Student Model is a collection of variables representing knowledge, skills, and abilities of an examinee about which inferences will be made. A Student Model is comprised of the following types of information: (1) Student Model Variables that correspond to aspects of proficiency the assessment is meant to measure; (2) Model Type that describes the mathematical form of the Student Model (e.g., univariate IRT, multivariate IRT, or discrete Bayesian Network); (3) Reporting Rules that explain how the Student Model Variables should be combined or sampled to produce scores. Summary Scoring Process The Summary Scoring Process is the part of the Assessment Cycle that updates the scoring based on the input it receives from Response Processing. At one time referred to as the "Evidence Accumulation Process," the Summary Scoring Process plays an important role in accumulating evidence. T Task A Task is a unit of work requested from an examinee during the course of an assessment. In ECD, a task is a specific instance of a Task Model.
Task/Evidence Composite Library The Task/Evidence Composite Library is a database of task objects along with all the information necessary to select and score them. For each such Task/Evidence Composite, the library stores (1) descriptive properties that are used to ensure content coverage and prevent overlap among tasks; (2) specific values of, or references to, Presentation Material and other environmental parameters that are used for delivering the task; (3) specific data that are used to extract the salient characteristics of Work Products; and (4) Weights of Evidence that are used to update the scoring from performances on this task, specifically, scoring weights, conditional probabilities, or parameters in a psychometric model. Task Model A Task Model is a generic description of a family of tasks that contains (1) a list of variables that are used to describe key features of the tasks, (2) a collection of Presentation Material Specifications that describe material that will be presented to the examinee as part of a stimulus, prompt, or instructional program, and (3) a collection of Work Product Specifications that describe the material that the task will return to the scoring process. Task Model Variables Task Model Variables describe features of the task that are important for designing, calibrating, selecting, executing, and scoring it. These variables describe features of the task that are important descriptors of the task itself, such as substance, interactivity, size, and complexity, or are descriptors of the task performance environment, such as tools, help, and scaffolding. Testlet Tree Treewidth Triangulated U Utility V Validity Value of Information Variable Vertex W Weak prior Weight of Evidence Work Product A Work Product is the Examinee's response to a task from a given task model. This could be expressed as a transcript of examinee actions, an artifact created by the examinee, and/or other appropriate information. The Work Product provides an important bridge between the Task Model and the Evidence Model.
In particular, work products are the input to the Evidence Rules. X Y Z
Apologies to all of you who were eagerly waiting for their fortnightly links, I was feeling sick and didn't feel like writing this one. On the other hand, on March 18 this year I announced that I wanted to do something with Fano 3-folds, à la superficie.info, and in a few days I will release what I came up with. • Ivan Cheltsov, Victor Przyjalkowski, Constantin Shramov: Fano threefolds with infinite automorphism groups is a paper I didn't know was in progress, but that I've been waiting for. I had computed some odd cases myself, but it's great to see all of them done carefully. And regarding the announcement from the introduction: I have incorporated all data on automorphism groups, so the information from this preprint will be included. • Ivan Cheltsov, Victor Przyjalkowski: Katzarkov-Kontsevich-Pantev conjecture for Fano threefolds is a monumental paper, checking a conjecture regarding the behavior of Hodge numbers in mirror symmetry. In homological mirror symmetry for Fano varieties, there is a correspondence between the Fano variety and a Landau–Ginzburg mirror, a morphism $w\colon Y\to\mathbb{A}^1$ (or compactified to $Y\to\mathbb{P}^1$), such that • the derived category of $X$ corresponds to the Fukaya–Seidel category of $w$ • the Fukaya category of $X$ corresponds to the category of matrix factorisations of $w$ As the derived category of $X$ in the Fano case completely determines $X$ (actually, it is conjectured that Hodge numbers are derived invariants) it is therefore interesting to see whether one can recover these numbers from the symplectic geometry of $w$. The conjecture suggests a method to compute Hodge numbers on the mirror side, and this is what is checked case-by-case for all Fano 3-folds, by computing $\mathrm{h}^{1,2}$ and $\mathrm{h}^{1,1}=\operatorname{rk}\operatorname{Pic}(X)$ in this way. 
• Joseph Karmazyn, Alexander Kuznetsov, Evgeny Shinder: Derived categories of singular surfaces is a paper I did know was in progress, and that I've been waiting for. This is as far as I know (one of the) first papers in which a systematic study of semiorthogonal decompositions in a singular setting is done, where one has to be much more careful about functors as not everything automatically has all adjoints. The smooth and proper setting really is a wonderland compared to the real world. This is a very interesting read!
# Alternate Angles

A transversal is a line that intersects two or more other lines. When transversals cut lines, they form many different pairs of angles. In this article, we're interested in alternate angles, which are sometimes also called alternate interior angles.

## Locating Alternate Angles

In the picture, the two lines are blue, and the transversal is orange. Alternate angles lie between the two lines, but on either side of the transversal. The picture shows two pairs of alternate angles, one marked by red arrows, and the other marked by green arrows. Can you see how they are the angles inside a "z"-shape? The "z"-shape for the red angles runs along the top blue line from left to right up to the transversal, down the transversal and then from left to right along the bottom blue line. The "z"-shape for the green arrows is back-to-front: from right to left across the top line, down the transversal and from right to left along the bottom blue line.

## Alternate Angles and Parallel Lines

If the transversal cuts a pair of parallel lines, the alternate angles that are formed are equal. In the picture, the two blue lines are parallel, so the transversal forms equal alternate angles. Both alternate angles marked in green are equal to $60^\circ$. They are the angles inside a "z". In the picture, the two blue lines are parallel, so the transversal forms equal alternate angles. Both alternate angles marked in green are equal to $120^\circ$. They are the angles inside a back-to-front "z". In the picture, the two blue lines are parallel, so the transversal forms equal alternate angles. The angles marked in pink are alternate angles, so they must be equal, and $x$ must equal $30^\circ$.

## A Test for Parallel Lines

Two lines, cut by a transversal, are parallel if the alternate angles are equal. So, if I hadn't told you that the two blue lines in this picture were parallel, you could work it out because the two alternate angles marked are equal.
Having equal alternate angles is a test for parallel lines. If the alternate angles aren't equal, then the lines aren't parallel. If the alternate angles are equal, then the lines are parallel.

### Description

In this mini book, you will learn about 1. Alternate angles 2. Cointerior angles 4. Vertically Opposite Angles and several other topics related to lines and angles.

### Audience

Year 10 or higher

### Learning Objectives

To learn the basics of Lines and Angles stream of Geometry Author: Subject Coach Added on: 28th Sep 2018
### Previous Year Paper Solution | Indian Economic Service Exam 2010 | General Economics - I

#### Q. No. 1 (a) - Define consumer's surplus. Derive an expression for it using integral calculus. (Comment for solution.)

#### Q. No. 1 (b) - Why is the short-run average cost curve U-shaped? Show that the marginal cost curve intersects the average cost curve at the latter's minimum point. (Comment for solution.)

#### Q. No. 1 (c) - Compare long-run equilibrium of the firm under perfect competition with that under monopolistic competition using a suitable diagram. (Comment for solution.)

#### Q. No. 1 (d) - What is a social welfare function? State the underlying assumption in its formulation. (Comment for solution.)

#### Q. No. 1 (e) - State and explain the assumptions of the two-variable linear regression model.

Answer to this question is available in Q. No. General Economics - I Previous Year Paper Solution 2013. The assumptions of regression and of OLS are the same because regression is generally done using the OLS method.

#### Q. No. 1 (f) - What is the log-normal distribution? Where is it used in economic analysis? (Comment for solution.)

#### Q. No. 2 - Derive the consumer's expenditure function by minimizing total expenditure; $$y = {p_1}{q_1} + {p_2}{q_2}$$ subject to utility constraint $$\bar u = {q_1}{q_2}$$. (Marks - 15)

The expenditure function can be derived if we know either the indirect utility function or the compensated demand curves of the two goods. Let us first derive the compensated demand curves for the two goods. Here the objective is to minimize total expenditure subject to the utility constraint.
We can write the Lagrangian function as follows: $L = {p_1}{q_1} + {p_2}{q_2} + \lambda (\bar u - {q_1}{q_2})$ The first-order conditions for minimization are: $\frac{{\partial L}}{{\partial {q_1}}} = \frac{{\partial L}}{{\partial {q_2}}} = \frac{{\partial L}}{{\partial \lambda }} = 0$ Taking the partial derivative of the Lagrangian function with respect to q1 and equating to zero, we get: $\frac{{\partial L}}{{\partial {q_1}}} = 0$ ${p_1} - \lambda {q_2} = 0$ $\lambda = \frac{{{p_1}}}{{{q_2}}}$ Similarly, taking the partial derivative of the Lagrangian function with respect to q2 and equating to zero, we get: $\frac{{\partial L}}{{\partial {q_2}}} = 0$ ${p_2} - \lambda {q_1} = 0$ $\lambda = \frac{{{p_2}}}{{{q_1}}}$ We can equate the values of $$\lambda$$: $\frac{{{p_1}}}{{{q_2}}} = \frac{{{p_2}}}{{{q_1}}}$ Rearranging the equation, we get: ${q_2} = \frac{{{p_1}{q_1}}}{{{p_2}}}$ Substituting this value of $${q_2}$$ in the utility constraint, we get: $\bar u = {q_1} \times \frac{{{p_1}{q_1}}}{{{p_2}}}$ Rearranging: $\bar u = q_1^2 \times \frac{{{p_1}}}{{{p_2}}}$ ${q_1} = \sqrt {\bar u\frac{{{p_2}}}{{{p_1}}}}$ This is the compensated demand function for Good - 1. Substituting this value of $${q_1}$$ in the utility constraint, we get: $\bar u = \sqrt {\bar u\frac{{{p_2}}}{{{p_1}}}} \times {q_2}$ Rearranging: ${q_2} = \sqrt {\bar u\frac{{{p_1}}}{{{p_2}}}}$ This is the compensated demand function for Good - 2. Now, we can find the expenditure function by substituting the compensated demand functions in the objective function as follows: $y = {p_1}{q_1} + {p_2}{q_2}$ $y = {p_1} \times \sqrt {\bar u\frac{{{p_2}}}{{{p_1}}}} + {p_2} \times \sqrt {\bar u\frac{{{p_1}}}{{{p_2}}}}$ $y = \sqrt {\bar u{p_1}{p_2}} + \sqrt {\bar u{p_1}{p_2}}$ $y = 2\sqrt {\bar u{p_1}{p_2}}$ This is the expenditure function we wanted to find. It can also be written as: $y = 2{{\bar u}^{\frac{1}{2}}}p_1^{\frac{1}{2}}p_2^{\frac{1}{2}}$ Solution video:

#### Q. No.
3 - Draw the consumer's indifference curve from revealed Preference Theory. (Comment for solution.)

#### Q. No. 4 - Separate the income effect from the substitution effect of a price change for a Giffen type good. Use a suitable diagram. (Comment for solution.)

#### Q. No. 5 - What is elasticity of factor substitution? Give various forms of production function based on this concept. (Comment for solution.)

#### Q. No. 6 - "Asymmetric or incomplete information leads to market failure." Examine the lemons problem in the above context with the help of pricing of used cars. (Comment for solution.)

#### Q. No. 7 - What is the Hicks-Kaldor criterion of compensation? What are its weaknesses? Give Scitovsky's suggestion for improvement. (Comment for solution.)

#### Q. No. 8 - Distinguish between positive and negative externalities and explain with examples. Why does government provide some goods which are not public goods? (Comment for solution.)

#### Q. No. 9 - What are type I and type II errors? Why is the probability of type I error fixed in a hypothesis testing problem? (Comment for solution.) Solution video:

#### Q. No. 11 - What is Peak-load Pricing? How is it different from third degree price discrimination? Give diagrams to illustrate your answer. (Comment for solution.)

#### Q. No. 12 - Define production function. The production function for a product is given by Q = 100KL. If the price of capital (K) is $120 per day and that of labour (L) is $30 per day, what is the minimum cost of producing 400 units of output? (Comment for solution.)

#### Q. No. 13 - (Comment for solution.)

1. I substituted the value of $${q_1}$$ into the utility constraint. The expression already contains the utility constraint, so it appears on both sides. This is an acceptable process in mathematics, which we call the substitution method.
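The closed form $y = 2\sqrt{\bar u p_1 p_2}$ derived in Q. No. 2 above can be sanity-checked numerically. The sketch below (plain Python; the prices and utility level are made-up values, not from the exam) brute-forces the minimum of $p_1 q_1 + p_2 q_2$ along the constraint $q_1 q_2 = \bar u$ and compares it with the formula.

```python
import math

def min_expenditure(p1, p2, u_bar, n=200_000):
    """Brute-force the minimum of p1*q1 + p2*q2 on the constraint q1*q2 = u_bar."""
    best = float("inf")
    for k in range(1, n):
        q1 = k / 100.0            # coarse grid over q1 > 0
        q2 = u_bar / q1           # constraint pins down q2
        best = min(best, p1 * q1 + p2 * q2)
    return best

p1, p2, u_bar = 4.0, 9.0, 25.0
closed_form = 2 * math.sqrt(u_bar * p1 * p2)   # y = 2*sqrt(u*p1*p2)
print(round(min_expenditure(p1, p2, u_bar), 2), closed_form)   # 60.0 60.0
```

The grid happens to contain the optimum $q_1 = \sqrt{\bar u p_2/p_1} = 7.5$, so the two numbers agree exactly here; in general the brute-force value would only approximate the closed form from above.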
# Proof of $\lim\sup(a_nb_n)\leq \lim\sup(a_n)\limsup(b_n)$ Let $a_n>0$ and $b_n\geq 0$, then $\lim\sup(a_nb_n)\leq \lim\sup(a_n)\limsup(b_n)$ My attempt at a proof is as follows. Let $A_n=\sup\{a_n, a_{n+1},…\}$, $B_n=\sup\{b_n, b_{n+1},…\}$, and $C_n=\sup\{a_nb_n, a_{n+1}b_{n+1},…\}$. Note: $a_mb_m \leq A_nB_n$ for all $m \geq n$. Thus $\limsup(a_nb_n)=\lim C_n \leq \lim (A_nB_n) = (\lim A_n)(\lim B_n) = (\limsup a_n)(\limsup b_n).$ #### Solutions Collecting From Web of "Proof of $\lim\sup(a_nb_n)\leq \lim\sup(a_n)\limsup(b_n)$" Missing pieces: 1. You need to assume the sequences are bounded. Otherwise the statement may be even nonsensical: how to interpret $1\le 0\cdot \infty$, for example? 2. $(A_n)$ and $(B_n)$ are nonnegative nonincreasing sequences; these properties imply the existence of the limits $\lim A_n$ and $\lim B_n$, which allows us to conclude about the existence and the value of $\lim (A_nB_n)$. You could have assumed $a_n\ge 0$ instead of $a_n>0$.
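The inequality can be strict, which a quick numeric illustration (not a proof) makes concrete. The sketch below approximates each limsup by the supremum of a long tail of the sequence; the two sequences are chosen so that $a_nb_n$ is constant while each factor oscillates.

```python
def limsup(seq_vals):
    """Approximate limsup of a bounded sequence by the sup of a long tail."""
    tail = seq_vals[len(seq_vals) // 2:]   # discard an initial segment
    return max(tail)

N = 1000
a = [2 + (-1) ** n for n in range(N)]   # 3, 1, 3, 1, ...  limsup = 3
b = [2 - (-1) ** n for n in range(N)]   # 1, 3, 1, 3, ...  limsup = 3
ab = [x * y for x, y in zip(a, b)]      # constantly 3

print(limsup(ab), limsup(a) * limsup(b))   # 3 <= 9: the inequality is strict here
```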
Solve the system using 2 methods. x+2y=6 2x+6y=8 hala718 | Certified Educator | Top subjects are Math, Science, and Social Sciences x+2y=6...(1) 2x+6y=8.....(2) First let us multiply (1) with -2 and add to (2): -2x-4y=-12 2x+6y=8 ==> 2y=-4 ==> y=-2 ==> x = 6-2y = 6-2(-2) = 6+4 = 10 neela | Student To solve x+2y=6...(1) and 2x+6y=8...(2), using two methods: Solution: Substitution method: From (2) we get: 2x = 8-6y Or x = (8-6y)/2 = 4-3y. We substitute x = 4-3y in equation (1): 4-3y+2y = 6. Or 4-y = 6. Or 4-6 = y. Or y = -2. Substituting y = -2 in (1), we get: x+2(-2) = 6. Or x = 6+4 = 10. Elimination method: 2y appears in eq (1) and 6y in eq (2). So 3*eq (1) - eq (2) eliminates y: 3(x+2y)-(2x+6y) = 3*6-8 = 10. Or 3x-2x = 10. Or x = 10. Using x = 10 in (2), we get: 10*2 + 6y = 8. Or 6y = 8-20. Or 6y = -12. Or y = -12/6 = -2. giorgiana1976 | Student The first method is the substitution method. We'll write one variable depending on the other one. From the first equation, we'll choose to write x = 6-2y. Now we'll substitute x in the second equation by its expression, and we'll get an equation in y. 2(6-2y)+6y=8 12-4y+6y=8 12+2y=8 We'll subtract 12 from both sides: 2y=8-12 2y=-4 y=-2 Now, we'll substitute y in the x expression. x=6-2y x=6+4 x=10 The second method is to reduce one variable from both equations of the system. We'll reduce x, and for this reason we'll multiply the first equation by -2: -2x-4y=-12 Now we'll add the new equation to the second one. -2x-4y+2x+6y=-12+8 2y=-4 y=-2 x=10
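Both hand methods above can be cross-checked by solving the system numerically; a minimal sketch with NumPy (assuming it is available):

```python
import numpy as np

# x + 2y = 6
# 2x + 6y = 8
A = np.array([[1.0, 2.0],
              [2.0, 6.0]])
b = np.array([6.0, 8.0])

x, y = np.linalg.solve(A, b)   # solves A @ [x, y] == b
print(x, y)                    # approximately 10 and -2
```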
# Overall Rating Issue I have seen many users who have: lower/NA ratings in LONG contest than mine. lower/NA ratings in SHORT contest than mine. lower/NA ratings in LUNCHTIME contest than mine. Still, their overall rating is much higher than mine. Also, while my SHORT contest rating increased by a few points in the last (October Cook-off 2017) challenge, the overall rating decreased. Could anyone explain this behaviour? asked 25 Oct '17, 00:14 Overall rating is not a "direct average" or a "simple relationship" with the 3 ratings of Long, Cook-off and Lunchtime. Since it's affected after every contest, it's more... let's say, dynamic. You may perform extremely well in 2 Long contests and Lunchtime but not in Cook-off. That doesn't mean your rating should have an upper bound of your Cook-off rating. "Also, while my SHORT contest rating increased by a few points in the last (October Cook-off 2017) challenge, the overall rating decreased." That's because you performed really well in the LONG contest and on that basis a better overall rank was expected from you. How it goes is: the Cook-off rating gives an expected rank X, the overall rating gives an expected rank Y. Now, what happens when you score a rank between this X and Y? Your rating for one will decrease and the other will increase. That's exactly what happened. answered 25 Oct '17, 00:27 Nice explanation @vijju123 (25 Oct '17, 01:08) spp____ Okay, I get what you are trying to say. Still, isn't it ironic that people with all lower ratings have a higher overall rating? No algorithm should be designed that way. What you are saying is that I performed better in all past challenges so I should've performed better in the next one too, and that's why the rating fell. That's OK. But the person who is higher in overall ratings must have done poorly somewhere to get lower individual ratings?
It is a bit confusing, though. Could you provide a simple illustration or example? (25 Oct '17, 01:14) Like, a user (https://www.codechef.com/users/tarun_1909) has the same Long rating as me. Still I have a lower overall rating. Explain this somehow. (25 Oct '17, 01:18) Doesn't this make the overall rating specific to a user? I mean you can't compare two users based on just their overall ratings? (25 Oct '17, 01:42) @spp____ Thanks for the appreciation. (25 Oct '17, 01:45) @rishabh.jain9196 - Saying person X is better than person Y since he has a better overall rating is... well, not always right either. I think I am missing something in interpreting it. What's wrong with the rating being closer/more specific to you? Is your question that "people cannot compare 2 users" or that "CodeChef cannot compare 2 users [and hence can't find expected rank etc.]"? (25 Oct '17, 01:47) Guess I have to accept it "as it is" now. :-p (25 Oct '17, 01:50) "No algorithm should be designed that way." Without knowing the algorithm and its purpose, you have no claim here. "What you are saying is that I performed better in all past challenges so I should've performed better in the next one too, and that's why the rating fell." Yes. "But the person who is higher in overall ratings must have done poorly somewhere to get lower individual ratings?" Or perhaps he didn't participate in that type of contest. Look, take it this way. Overall rating is changed at every rated contest you participate in. In a month, assuming you give 100% participation, it changes 3 times. After every change, it gets fresh statistics like volatility, expected rank, etc. In this way, it is more "recent" or "updated". But contest-specific ratings change only once a month, and your overall rating changes twice (at 100% participation) before that contest is held again. So in a way, this is a bit more "outdated" and more dependent on how you were back then rather than how you are now.
And this ought to answer why there are variations in overall rating and rating per contest. Ultimately the difference is a result of the frequency of changes and differences in statistics, since contest-specific ratings are only concerned with how you do in one type of contest. "Could you provide a simple illustration or example?" For something as complex as this, for which even I didn't go into those minute details, I can't. Sorry for that. answered 25 Oct '17, 01:34 "No algorithm should be designed that way" - that's my take on it. No hard feelings. Not looking for a fight here, lol. :-p Although, I see what you are trying to say. But somehow that doesn't "feel" correct. I guess discussing it still won't change it. Still, you somehow convinced me on the rating drop issue, so that's good! (25 Oct '17, 01:47) Hey, I understand that's your take on it. I re-read it; it sounds a bit that way because of the formal tone in which I wrote it (w/o realizing it lol). The only thing I wanted to convey was that we cannot actually comment on that thing before being in @admin's shoes (or high-heels :p). Can you pin-point what doesn't feel right? If you can clearly point it out, I can make/convince @admin to give a reply. Thanks! :) (25 Oct '17, 01:50) I got your point, thanks. (25 Oct '17, 02:18)
# Absolute value

In this section you'll learn how to find the absolute value of integers. 4 - 0 = 4 4 - 1 = 3 4 - 2 = 2 4 - 3 = 1 4 - 4 = 0 4 - 5 = -1 In this pattern you can see that 4 - 5 is equal to a negative number. A negative number is a number that is less than zero (in this case -1). We can study this in a diagram by using two examples: 0 - 4 = -4 and -1 - 3 = -4 This kind of diagram is called a number line. There are some things that you need to observe when you draw and/or use a number line. Zero is always in the middle and separates negative and positive numbers. On the left side of zero, you'll find numbers that are less than zero, the negative numbers. On the right side of zero, you'll find numbers that are greater than zero, the positive numbers. The absolute value is the same as the distance from zero of a specific number. On this number line you can see that 3 and -3 are on opposite sides of zero. Since they are the same distance from zero, though in opposite directions, in mathematics they have the same absolute value, in this case 3. The notation for absolute value is to surround the number by straight lines as in the examples below.

Example Simplify the following expression $\left | 3 \right |+\left | -3 \right |=?$ $=3+3=6$

## Video lesson

Evaluate the following expression $\left | x \right |+11,\: \:\: x=-9$
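Python's built-in abs mirrors the distance-from-zero definition, which makes both exercises above easy to check (a small illustrative sketch):

```python
# Opposite numbers have the same distance from zero:
print(abs(3) + abs(-3))   # 6

# The video lesson's expression |x| + 11 at x = -9:
x = -9
print(abs(x) + 11)        # 20
```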
# DEFM 311 Forum 1

Describe how you think contracting fits into the acquisition program. Feel free to use some of your own experiences and examples.

Contracting is an essential part of the acquisition program. Right now I work as a contracting specialist for the Army, and in this post I will explain how it works in my organization to the best of my abilities. It starts with the customer having a need for a supply or service. After a determination has been made that, for example, they cannot receive the supply from another unit, that the price exceeds the micro-purchase threshold, and that it cannot be purchased with a government purchase card, they then prepare a requirement package to submit to the contracting activity. The complexity and the price of the requirement determine what will be required... DEFM 311 LEOTIS82
# Inventory Turnover Ratio

Inventory turnover is the ratio of cost of goods sold by a business to its average inventory during a given accounting period. It is an activity ratio measuring the number of times per period a business sells and replaces its entire batch of inventory.

## Formula

Inventory turnover ratio is calculated using the following formula: Inventory Turnover = Cost of Goods Sold ÷ Average Inventory. The cost of goods sold figure is obtained from the income statement of a business, whereas average inventory is calculated as the sum of the inventory at the beginning and at the end of the period divided by 2. The values of beginning and ending inventory are obtained from the balance sheets at the start and at the end of the accounting period.

## Analysis

Inventory turnover ratio is used to measure the inventory management efficiency of a business. In general, a higher value of inventory turnover indicates better performance and a lower value means inefficiency in controlling inventory levels. A lower inventory turnover ratio may be an indication of over-stocking, which may pose a risk of obsolescence and increased inventory holding costs. However, a very high value of this ratio may be accompanied by loss of sales due to inventory shortage. Inventory turnover is different for different industries. Businesses which trade perishable goods have much higher turnover compared to those dealing in durables. Hence a comparison would only be fair if made between businesses of the same industry.

## Examples

Example 1: During the year ended December 31, 2010, Loud Corporation sold goods costing $324,000. Its average stock of goods during the same period was $23,432. Calculate the company's inventory turnover ratio. Solution Inventory Turnover Ratio = $324,000 ÷ $23,432 ≈ 13.83 Example 2: Cost of goods sold of a retail business during a year was $84,270 and its inventory at the beginning and at the ending of the year was $9,865 and $11,650 respectively.
Calculate the inventory turnover ratio of the business from the given information. Solution Average Inventory = ($9,865 + $11,650) ÷ 2 = $10,757.5 Inventory Turnover = $84,270 ÷ $10,757.5 ≈ 7.83 Written by Irfanullah Jan
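The two worked examples above can be reproduced with a short helper; a sketch in Python (the function names are my own, not from the article):

```python
def average_inventory(beginning, ending):
    """Average of the inventory values from the opening and closing balance sheets."""
    return (beginning + ending) / 2

def inventory_turnover(cogs, avg_inventory):
    """Cost of goods sold divided by average inventory for the period."""
    return cogs / avg_inventory

# Example 1: average inventory is given directly.
print(round(inventory_turnover(324_000, 23_432), 2))   # 13.83

# Example 2: average inventory computed from beginning/ending values.
avg = average_inventory(9_865, 11_650)                 # 10757.5
print(round(inventory_turnover(84_270, avg), 2))       # 7.83
```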
# Should cosecant be defined as $\csc \theta = \frac{1}{\sin \theta}$, specifying the constraint: $\sin \theta \neq 0$? I'm studying trigonometry on my own, and I keep noticing that the trigonometric functions are never defined with constraints to deal with divide-by-zero issues. As an example, I've seen cosecant defined like this: $\csc \theta = \frac{1}{\sin \theta}$ I've encountered this definition in the book I'm currently working through, and online resources 1 2 3, but as $\sin \theta$ has the range [-1, 1] shouldn't cosecant really be defined as: $\csc \theta = \frac{1}{\sin \theta}, \sin \theta \neq 0$? Am I missing something? • It is so obvious that no one writes it. – Kartik Aug 23 '15 at 10:46 Those are the same thing. Either it's undefined as part of the definition (i.e. part of $\mathbb{R}$ is not in the domain) or it's undefined as a consequence of $\frac{1}{0}$ being undefined. It's just like how $f(x)=\frac{1}{2-x}$ is completely clear without a note saying "but x can't be 2!" If a function is undefined, then the values for which it is undefined are not part of its domain by definition.
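The answer's point carries over to code: whether you exclude the zeros of sine explicitly or let the division fail, the excluded inputs are the same. A small sketch (the explicit guard is my own choice, not a standard-library behaviour):

```python
import math

def csc(theta):
    """1/sin(theta), with the zeros of sin excluded from the domain explicitly."""
    s = math.sin(theta)
    # Note: in floating point, math.sin(math.pi) is tiny but nonzero, so this
    # guard only fires at inputs whose sine is exactly 0.0 (such as theta = 0).
    if s == 0.0:
        raise ValueError("csc(theta) is undefined where sin(theta) = 0")
    return 1.0 / s

print(csc(math.pi / 2))   # 1.0
```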
As part of the commissioning phase of the TROPOMI instrument on board ESA's Sentinel-5P spacecraft, a dedicated radiometric measurement was put in place to estimate radiometric errors in Level 1B products, based on S5P early in-flight calibration measurements. Simulations of atmospheric transmission spectra showed that the atmosphere is opaque at tangent altitudes below 10-15 km near the O2 A band at 761 nm. Measurements in this range, taken in a so-called solar occultation geometry, could potentially be used to detect residual out-of-band stray light effects.

In this post, we are going to show how to use Matplotlib to display an animated plot of the solar irradiance spectra detected during this dedicated solar calibration measurement as a function of time – which roughly translates into decreasing limb tangent heights – and observe the strong O2 A band absorption features occurring around 761 nm.

Let's import our favourite libraries and load the data from our input product. Our data are contained in a Sentinel-5P L1B product, a NetCDF4 file holding both measurements and instrument data. Let's open it and extract the variables we are interested in:

• irradiance, the measured spectral irradiance for each spectral pixel and each associated spectral channel, across all scanlines (which is our time dimension). Irradiance is a measure of solar power, defined as the rate at which solar energy falls onto a surface.

• calibrated_wavelength, a calibrated set of wavelengths derived from the comparison between the obtained irradiance measurement and a reference solar spectrum, which provides a per-pixel best estimate of the wavelength actually measured by each individual spectral channel.

The scanline relative timestamps can be derived by adding the delta_time values, expressed as milliseconds from the reference time, to a time_reference value.
We are going to make use of the dateutil module to parse the time_reference attribute, which is a UTC time specified as an ISO 8601 date, and then build an array of relative timestamps that can be indexed by our scanlines.

    Reference time: 2017-11-19 00:00:00+00:00

## Making sense of the data

Let's have a closer look at our irradiance variable:

    <class 'netCDF4._netCDF4.Variable'>
    _FillValue: 9.96921e+36
    units: mol.m-2.nm-1.s-1
    unlimited dimensions:
    current shape = (1, 2772, 504, 520)
    filling on

Which, in human terms, tells us that for each of the 2772 scanlines there are 504 pixels and 520 associated spectral channels. Each point in this (scanline, pixel, channel) grid holds an irradiance measurement represented as a float value.

We want to “slice” the measurements around our area of interest, the O2 A band at 761 nm, for example in the 756-772 nm range. Let's consider a pixel – our reference pixel for the remainder of this post – in the centre of the swath, together with its associated spectral channels, and let's use NumPy's where function to retrieve the spectral channel range enclosing the wavelengths we are interested in.

    Spectral channel range: (254, 384)

## Plotting the data

Now that we know which spectral channels we need to look at, let's have a quick look at the evolution of the irradiance level across all scanlines, that is, with respect to the spacecraft's position relative to the sun. For that, we are going to make use of Matplotlib's pcolormesh function.

We want to focus our attention on what's happening around scanline 2250, which seems to be around the time when the sun “sets” from the point of view of our instrument's line of sight. We can therefore slice our original irradiance along the scanline and spectral channel axes. At the same time, we can improve our signal-to-noise ratio by averaging measurements around our reference pixel; this also reduces our total number of dimensions by one. We can slice our calibrated_wavelength array around our reference pixel in the same way.
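These two steps can be sketched as follows. The time_reference value matches the one printed above, but the delta_time values and the per-channel wavelength grid are made up for the sketch, so the printed channel range will differ from the product's (254, 384):

```python
import numpy as np
from datetime import timedelta
from dateutil import parser

# Parse the ISO 8601 reference time and offset it by the per-scanline
# delta_time values (milliseconds since the reference).
time_reference = parser.parse("2017-11-19T00:00:00Z")
delta_time_ms = np.array([0, 1080, 2160, 3240])          # made-up deltas
timestamps = [time_reference + timedelta(milliseconds=int(dt))
              for dt in delta_time_ms]
print("Reference time:", time_reference)

# Locate the spectral channels enclosing the 756-772 nm window for a
# reference pixel in the centre of the swath (synthetic wavelengths).
wavelength = np.linspace(725.0, 786.0, 497)              # nm, per channel
channels, = np.where((wavelength >= 756) & (wavelength <= 772))
channel_range = (channels[0], channels[-1] + 1)
print("Spectral channel range:", channel_range)
```

`np.where` on the boolean mask returns the indices of all matching channels; the first and (exclusive) last of those give us the slice bounds to use on the spectral axis.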
Let’s now plot our averaged irradiance for a subset of scanlines.

We can clearly see how the irradiance gets squashed as the sun gradually drops out of sight. The effect of the O2 absorption is also distinctly visible around 761 nm. How cool would it be to animate this sequence?

## Animating the data

It turns out that we can easily set up animations in Matplotlib: we just need to set up our plot first, and then define a function specifying the frame to be plotted at the i-th step of the animation.

Next, we are going to pass the just-defined animate function to FuncAnimation, Matplotlib's function for making an animation by repeatedly calling a function. We need to specify the number of frames and the delay between frames in milliseconds. The blit parameter tells the function to re-draw only the parts of the frame that have changed.

Next, we are going to make use of IPython's inline display tools to display our animation within the Jupyter notebook. We can then save our animation.

How can we add a bit of life to our animation? What about animating a pcolormesh plot of the irradiance over a rolling window of a given number of scanlines? Now that would be cool!

We are going to start by defining a few parameters delimiting the scope of the animation, and by retrieving our original irradiance variable and re-slicing it to increase the scanline range. We define a rectangle of 200 scanlines over our irradiance pcolormesh plot, and we are going to “roll” it over the plot until it plunges into total darkness.

Let’s set up our plot. The magic behind the re-drawing of each frame lies in the set_array function, which redefines the QuadMesh data structure underlying the pcolormesh plot. Because of how pcolormesh creates the QuadMesh (see the definition here), we need to subtract one from each dimension to make the dimensions fit together — this answer on StackOverflow helped me a great deal in understanding why my plot looked funny in my first attempts.
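A minimal, self-contained sketch of the rolling-window animation is below. All the data here is synthetic (a fading spectrum with a fake O2 A band dip); the real sliced irradiance array from the product would drop straight in:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                      # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Synthetic stand-in for the sliced irradiance: (scanline, spectral_channel),
# fading towards darkness along the scanline axis.
n_scanlines, n_channels, window = 280, 130, 200
wavelength = np.linspace(756.0, 772.0, n_channels)
dip = 1 - 0.6 * np.exp(-((wavelength - 761.0) ** 2) / 0.5)  # fake O2 A band
irradiance = np.exp(-np.arange(n_scanlines) / 80.0)[:, None] * dip[None, :]

fig, ax = plt.subplots()
# Plot the first 200-scanline window; the animation rolls it forward.
mesh = ax.pcolormesh(irradiance[:window])
ax.set_xlabel("spectral channel")
ax.set_ylabel("scanline (relative)")

def animate(i):
    # set_array redefines the data backing the QuadMesh. Recent Matplotlib
    # accepts the 2D array directly; older versions expected a flattened
    # array with one value per quad, which is where the note above about
    # subtracting one from each dimension comes from.
    mesh.set_array(irradiance[i:i + window])
    return (mesh,)

anim = FuncAnimation(fig, animate, frames=n_scanlines - window,
                     interval=50, blit=True)
anim.save("rolling_irradiance.gif", writer="pillow")
```

In a notebook, `IPython.display.HTML(anim.to_jshtml())` would display the animation inline instead of (or in addition to) saving it to a file.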
Let’s bring the animation to life and display it in our notebook. We can now save the video and send it back home; now that’s something that’s going to make our parents proud!

That’s it for now! I hope you enjoyed this post; feel free to comment if you have questions or remarks. The Jupyter notebook for this post is available on my GitHub repository.