# Deconvoluting Convolutional Neural Networks

Posted in Machine Learning

# Introduction: A Simple CNN Example

As part of our weekly Deep Learning for Genomics reading group here in the Lab for Data Intensive Biology (DIB Lab), we are applying convolutional neural networks (deep learning) to various problems in genomics and biology. For the most recent meeting, we prepared some notes on how convolutional neural networks work. The notes are in the form of a Jupyter notebook. This blog post summarizes some of the important conclusions from the notebook and links to relevant sections in the notebook.

In the notebook covered in this blog post, we set up a simple convolutional neural network from an example on the keras blog. This example is used to classify input images as being either a cat or a dog. All materials covered in this blog post are in the charlesreid1/deconvoluting-convolutions repository on Github.

# Exploring the Data

TL;DR: When developing a deep learning model for a problem, it is important to start by exploring the data and understanding it thoroughly.

Link to "Image Data" section of notebook

# Create CNN

TL;DR: Our convolutional neural network consists of the following architecture:

- Convolutional Stage #1
    - Convolution (3 x 3 kernel, 32 filters)
    - Activation (ReLU)
    - Max Pooling (2 x 2)
- Convolutional Stage #2
    - Convolution (3 x 3 kernel, 32 filters)
    - Activation (ReLU)
    - Max Pooling (2 x 2)
- Convolutional Stage #3
    - Convolution (3 x 3 kernel, 64 filters)
    - Activation (ReLU)
    - Max Pooling (2 x 2)
- Flatten
- Dense (64 nodes)
    - Activation (ReLU)
    - Dropout (0.5)
- Dense (1 node)
    - Activation (Sigmoid)

Link to "Create Convolutional Neural Network" section of notebook

# Analyzing Network Architecture and Tensor Shapes

TL;DR: Each step of the neural network transforms an input tensor of a given shape into an output tensor of a (potentially different) shape.
In this section of the notebook, we step through each of the neural network's layers to explain how the size of each layer's inputs and outputs is determined.

Link to "Network Architecture/Shapes" section of notebook

## Input Image Layer

TL;DR: The size of the cat and dog images is 150 x 150 pixels. Each image is a color image, so it consists of 3 channels. Therefore, the input to the very first layer has a shape of

$$(\mbox{None}, w_0, h_0, c_0) = (\mbox{None}, 150, 150, 3)$$

(where "None" indicates a variable-size dimension equal to the total number of input images, or alternatively, the number of images per batch, if we are using batch learning).

Link to "Input Image Layer" section of notebook

## First Convolution Layer

TL;DR: A convolutional layer with a kernel size of $$k_1 \times k_1$$ and a number of filters $$c_1$$ will transform the shape of the input image to:

$$(\mbox{None}, w_1, h_1, c_1) = (\mbox{None}, 148, 148, 32)$$

where

$$w_1 = w_0 - k_1 + 1 \\ h_1 = h_0 - k_1 + 1$$

Importantly, the contributions from all three input channels are summed when computing each output filter - the number of input channels does not affect the number of output channels. The total number of output channels is equal to the number of filters in the convolution layer.

Link to "First Convolutional Layer" section of notebook

## First Activation Layer

TL;DR: The activation layer is a straightforward one-to-one mapping - each individual value from the output of the convolution layer is fed through the rectified linear unit (ReLU) function, and the resulting output value becomes the input to the next layer. The ReLU function is given by:

$$\mbox{ReLU}(x) = \max(0,x)$$

The activation layer does not change the shape of the input tensor.
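The shape arithmetic for a valid (no-padding) convolution, and the shape-preserving nature of ReLU, can be sketched in a few lines of pure Python (the helper names here are ours, not keras's):

```python
def conv_out(size, kernel):
    # valid (no-padding) convolution: the output shrinks by (kernel - 1)
    return size - kernel + 1

def relu(x):
    # rectified linear unit: a one-to-one mapping, so shapes are unchanged
    return max(0, x)

# first convolutional layer: 150 x 150 input, 3 x 3 kernel
w1 = conv_out(150, 3)   # 148
```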
Link to "First Activation Layer" section of notebook

## First MaxPooling Layer

TL;DR: The max pooling layer is a way of making the final convolutional filters (the "feature-detectors" of the convolutional neural network) less sensitive to the exact placement of features. The pooling layer only affects the size of the filter, not the number of channels. If we use a max pooling window of $$p_1 \times p_1$$, we will reduce the image size to $$\mbox{ceil}(w_1/p_1) \times \mbox{ceil}(h_1/p_1)$$. This reduces the input tensor shape to:

$$(\mbox{None}, \mbox{ceil}(w_1/p_1), \mbox{ceil}(h_1/p_1), c_1) = (\mbox{None}, 74, 74, 32)$$

Link to "First Max Pooling Layer" section of notebook

## Second Convolution Layer

TL;DR: The second convolutional layer has a kernel size of $$k_2 \times k_2$$ and a number of filters $$c_2$$, which will transform the shape of the input image in the same way as described for the first convolutional layer. Note that just as the number of channels (3) in each input to the first convolutional layer did not affect the number of channels in its output (that number was fixed by specifying the number of output filters for the layer), so the number of input channels to the second convolutional layer does not affect the number of output channels from the second convolutional layer. The final shape coming out of the second convolutional layer is:

$$(\mbox{None}, w_2, h_2, c_2) = (\mbox{None}, 72, 72, 32)$$

where

$$w_2 = w_1 - k_2 + 1 \\ h_2 = h_1 - k_2 + 1$$

Link to "Second Convolutional Layer" section of notebook

## Second Activation Layer

TL;DR: The activation layer again maps input values to output values one-to-one, so it does not change the shape of the input tensor.

Link to "Second Activation Layer" section of notebook

## Second MaxPooling Layer

TL;DR: The second max pooling layer uses a pooling window of size $$p_2 \times p_2$$.
This will reduce the input size to $$\mbox{ceil}(w_2/p_2) \times \mbox{ceil}(h_2/p_2)$$. This reduces the input tensor shape to:

$$(\mbox{None}, \mbox{ceil}(w_2/p_2), \mbox{ceil}(h_2/p_2), c_2) = (\mbox{None}, 36, 36, 32)$$

Link to "Second Max Pooling Layer" section of notebook

## Third Convolution Layer

TL;DR: The third convolution layer, with a kernel size of $$k_3 \times k_3$$ and $$c_3$$ output filters, will transform the input tensor shape in the following way (note that the third convolutional layer has 64 filters, not 32):

$$(\mbox{None}, w_3, h_3, c_3) = (\mbox{None}, 34, 34, 64)$$

where

$$w_3 = w_2 - k_3 + 1 \\ h_3 = h_2 - k_3 + 1$$

Link to "Third Convolutional Layer" section of notebook

## Third Activation Layer

TL;DR: The activation layer again maps input values to output values one-to-one, so it does not change the shape of the input tensor.

Link to "Third Activation Layer" section of notebook

## Third MaxPooling Layer

TL;DR: The third max pooling layer uses a pooling window of size $$p_3 \times p_3$$. This will reduce the input size to $$\mbox{ceil}(w_3/p_3) \times \mbox{ceil}(h_3/p_3)$$. This reduces the input tensor shape to:

$$(\mbox{None}, \mbox{ceil}(w_3/p_3), \mbox{ceil}(h_3/p_3), c_3) = (\mbox{None}, 17, 17, 64)$$

Link to "Third Max Pooling Layer" section of notebook

## Flatten and Dense Layers

TL;DR: The flatten layer converts a tensor of dimension $$(\mbox{None}, 17, 17, 64)$$ into a 1D vector of $$17 \times 17 \times 64 = 18,496$$ neural network nodes. This does not change any of the values; it simply reshapes the input tensor.

The first dense layer reduces the flattened $$18,496$$ nodes to $$64$$ nodes, using a fully connected layer of nodes. These values are then passed through an activation function (as with the activation layers above, this is a one-to-one mapping and does not change the shape of the input tensor).
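The layer-by-layer shape arithmetic above can be checked end-to-end with a short pure-Python sketch (the helper names and loop structure are ours; only the kernel, pooling, and filter sizes come from the network described above):

```python
import math

def conv_out(size, kernel):
    # valid convolution: output shrinks by (kernel - 1)
    return size - kernel + 1

def pool_out(size, pool):
    # max pooling with a pool x pool window
    return math.ceil(size / pool)

w = h = 150  # 150 x 150 x 3 input images
shapes = []
for kernel, pool, channels in [(3, 2, 32), (3, 2, 32), (3, 2, 64)]:
    w, h = conv_out(w, kernel), conv_out(h, kernel)   # convolution
    w, h = pool_out(w, pool), pool_out(h, pool)       # max pooling
    shapes.append((w, h, channels))

flattened = w * h * shapes[-1][2]   # nodes after the flatten layer
```

Running this trace reproduces the three stage outputs (74, 74, 32), (36, 36, 32), and (17, 17, 64), and the 18,496-node flattened vector.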
The dense layer is followed by a dropout layer to help prevent overfitting; this pattern is common in convolutional neural networks. The second dense layer further reduces the $$64$$ nodes to a single node, whose output will determine whether the input image is a cat or a dog.

Link to "Flatten Layer" section of notebook

Link to "Dense (64) Layers" section of notebook

Link to "Dense (1) Layers" section of notebook

## Categorical Output

TL;DR: Normally when classifying cats and dogs, we would have two output neurons: one to output a binary yes/no answering "is this a cat?" and another to output a binary yes/no answering "is this a dog?". However, in this example, we assume that each input contains either only cats or only dogs, so the single-output binary classifier determines whether an image is a dog (0) or a cat (1).

# Image Transformer

TL;DR: The ImageDataGenerator class is provided by keras for loading image data from a directory and (optionally) applying various transformations to the images in order to generate additional training data from a set of images.

For example, the following code block from the notebook creates an ImageDataGenerator that will load images from a folder on disk and apply various transformations (shearing, zooming, and horizontally flipping) to each image during the training process:

    train_datagen = ImageDataGenerator(
        rescale=1. / 255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)

This can then be used to generate training image data:

    train_generator = train_datagen.flow_from_directory(
        'train',
        target_size=(img_width, img_height),
        batch_size=batch_size,
        class_mode='binary')

This will look for images in subdirectories of the relative path train/ (one subdirectory per class). Note that this image data generator allows us to use images that do not have size $$150 \times 150$$, as they will be re-sized to target_size.
Link to "Image Transformer" section of notebook

## Next Steps

Now that we have walked through a sample convolutional neural network and covered how each layer transforms the size of the input tensor, we are ready to start applying convolutional neural networks to real problems.

Our next blog post will cover the materials in the charlesreid1/deep-learning-genomics repository on Github, which applies the convolutional neural network concept in a 1D context (applying convolutions to 1D sequences, instead of 2D images) to learn about (and predict) DNA transcription factor binding sites.

# Graphs for Bioinformatics, Part 1: de Bruijn Graphs, Hamiltonian Paths, and Eulerian Paths

Posted in Computational Biology

# The Context: Rosalind.info

To provide a bit of context for a discussion of Euler paths and Euler cycles: starting around December, a group of us in the Lab for Data Intensive Biology (DIB Lab) started working through the textbook Bioinformatics Algorithms: An Active Learning Approach and the associated website, Rosalind.info.

Rosalind.info is a site that is similar in style to Project Euler, a familiar topic on this blog. Project Euler poses computationally challenging problems in the domain of mathematics. As with Project Euler, the visitor is given one small example input with the corresponding correct output, and one large example input with corresponding output. Also as with Project Euler, the problems vary in how much computer science versus domain expertise is needed, but they are largely focused on writing algorithms rather than on the science behind the computations. Unlike Project Euler, however, Rosalind.info does give plenty of hints (via the textbook, if you have a copy), and sometimes even gives pseudocode for the algorithm. The book is required to get enough context to answer some of the Rosalind.info problems.

# Graphs for Bioinformatics

The textbook focuses on different problems in each chapter.
For example, Chapter 1 uses the example of a string of DNA that marks where replication begins to introduce some basic bioinformatics concepts and algorithms. Chapter 2 uses the concept of molecular clocks to introduce motifs and motif-finding, the focus of most of the problems in Chapter 2. Chapter 3 focuses on the problem of genome assembly - how we assemble an entire genome from short segments alone. In particular, the chapter focuses on de Bruijn graphs: graphs that, given a sequence of symbols drawn from an alphabet, are composed of edges (one for each k-mer, that is, each chunk of the sequence of length k) and vertices (one for each (k-1)-length k-mer prefix or suffix, with each k-mer's prefix vertex joined to its suffix vertex by a directed edge). We will cover more of the details of these graphs shortly.

## Building a K-mer Graph (The Wrong Graph)

The Bioinformatics Algorithms book starts with a general discussion of how to represent a sequence of DNA nucleotides using a graph. The idea it discusses initially (which is an obvious, but not necessarily good, one) is splitting the sequence into k-mer chunks, like so:

    Sequence: TAATGCCATGGGATGTT

    Pieces:   TAA AAT ATG TGC GCC CCA CAT ATG TGG GGG GGA GAT ATG TGT GTT

and letting one k-mer be represented by one vertex. Then the sequence above could be turned into the graph:

    TAA -> AAT -> ATG -> TGC -> GCC -> CCA -> CAT -> ATG -> TGG -> GGG -> GGA -> GAT -> ATG -> TGT -> GTT

On this graph, every edge has the property that the first (k-1) nucleotides of the destination match the last (k-1) nucleotides of the source. If we did not know the sequence in advance, we could draw every edge with that property - every time the last (k-1) characters of one k-mer match the first (k-1) characters of another k-mer, an edge is drawn between those two vertices. That graph would have many more edges than the graph shown above.
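As a sketch of this idea (the function and variable names here are ours), here is how the sequence above splits into 3-mers, and how the "every matching overlap" version of the graph picks up extra edges:

```python
def kmers(seq, k):
    # all length-k chunks of the sequence, one per starting position
    return [seq[i:i+k] for i in range(len(seq) - k + 1)]

pieces = kmers("TAATGCCATGGGATGTT", 3)

# draw an edge whenever the last (k-1) characters of one k-mer
# match the first (k-1) characters of another
edges = {(a, b) for a in set(pieces) for b in set(pieces) if a[1:] == b[:-1]}
```

Because ATG occurs three times in the sequence, the vertex ATG fans out to several successors (TGC, TGG, TGT) - exactly why this graph ends up with more edges than the single chain shown above.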
Furthermore, in theory, if each read sequence came from a single genome and we had the entire genome covered by read sequences, a path through the graph that visits every vertex (every k-mer) would yield the full genome. A path through a graph that visits every vertex exactly once is called a Hamiltonian path. Why is this hard? Because proving that a Hamiltonian path exists, let alone finding it, becomes very difficult for large graphs.

## Building a De Bruijn Graph (The Right Graph)

Nicolaas de Bruijn introduced (in 1946, in a paper entitled simply "A combinatorial problem") a new way of representing a sequence with a graph. He split a given sequence into k-mers, as before, but instead of representing each k-mer as a vertex on the graph, he represented each k-mer as an edge on the graph. This type of graph is called a de Bruijn graph.

Specifically, for a DNA sequence, each k-mer from the sequence is represented by an edge, where the source vertex is that k-mer's (k-1)-nucleotide prefix and the destination vertex is that k-mer's (k-1)-nucleotide suffix:

    Sequence: TAATGCCATGGGATGTT

    Pieces:   TA AA AT TG GC CC CA AT TG GG GG GA AT TG GT TT

Now this sequence is written as the graph:

    TA -> AA -> AT -> TG -> GC -> CC -> CA -> AT -> TG -> GG -> GG -> GA -> AT -> TG -> GT -> TT

so that the original breakup of the sequence into k-mers is still represented, but now as edges rather than as vertices. That is, the k-mer TAA is represented by the edge TA -> AA.

## Transform the Problem: Hamiltonian Paths to Eulerian Paths

The change in the problem representation (k-mers as vertices to k-mers as edges) changes the problem of finding a Hamiltonian path (a path through the graph that visits every vertex exactly once) into the problem of finding an Eulerian path (a path through the graph that visits every edge exactly once).
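To make this concrete, here is a minimal sketch (names and structure are ours, not the textbook's) that builds the de Bruijn graph for the example sequence and walks an Eulerian path using Hierholzer's algorithm. Note that an Eulerian path need not be unique: the assembled string is guaranteed to contain the same multiset of k-mers as the original sequence, but may differ from it.

```python
from collections import defaultdict

def debruijn_edges(seq, k):
    # one edge per k-mer: (k-1)-mer prefix -> (k-1)-mer suffix
    graph = defaultdict(list)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i+k]
        graph[kmer[:-1]].append(kmer[1:])
    return graph

def euler_path(graph, start):
    # Hierholzer's algorithm: repeatedly follow unused edges,
    # backtracking when a vertex has no unused out-edges left
    remaining = {v: list(ws) for v, ws in graph.items()}
    stack, path = [start], []
    while stack:
        v = stack[-1]
        if remaining.get(v):
            stack.append(remaining[v].pop())
        else:
            path.append(stack.pop())
    return path[::-1]

seq = "TAATGCCATGGGATGTT"
# start at the prefix of the first k-mer, the vertex whose
# out-degree exceeds its in-degree (the Eulerian path start)
path = euler_path(debruijn_edges(seq, 3), seq[:2])

# glue the walk back into a sequence: first vertex, then one
# character per edge traversed
assembled = path[0] + "".join(v[-1] for v in path[1:])
```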
# An Example

Let's look at a slightly simpler example - the one de Bruijn was originally considering - so we can see de Bruijn graphs in action in a simpler case. In his 1946 paper "A combinatorial problem", de Bruijn describes the problem thus:

> Some years ago Ir. K. Posthumus stated an interesting conjecture concerning certain cycles of digits 0 or 1, which we shall call $$P_n$$ cycles. For $$n = 1, 2, 3, \dots$$, let a $$P_n$$ cycle be an ordered cycle of $$2^n$$ digits 0 or 1, such that the $$2^n$$ possible ordered sets of $$n$$ consecutive digits of that cycle are all different. As a consequence, any ordered set of $$n$$ digits 0 or 1 occurs exactly once in that cycle. For example, a $$P_3$$ cycle is $$00010111$$, respectively showing the triples 000, 001, 010, 101, 011, 111, 110, 100, which are all the possible triples indeed.

In this case, de Bruijn is discussing complete de Bruijn graphs - he constructs a de Bruijn graph of all possible 3-mers (our k-mers, with $$k = 3$$), and constructs a path through the graph that visits every edge of the graph. Here is the sequence broken down as above:

    Sequence: 00010111

    Pieces:   00 00 01 10 01 11 11

The alphabet here is binary: 0 and 1. This (seemingly simple) example is a bit confusing, but here's what's going on: we have four vertices on the de Bruijn graph, consisting of the 2-mers:

    00 01 10 11

Now, if we draw an edge for every possible 3-mer, we would start with the 3-mer 000, which is actually represented by a self-edge from vertex 00 to vertex 00, because the prefix matches the suffix. Similarly, the 3-mer 111 is represented by a self-edge from vertex 11 to vertex 11. The other 3-mers are represented by their corresponding edges: 001 is represented by the edge 00 -> 01, 010 by the edge 01 -> 10, etc.
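Posthumus's property - that every ordered triple occurs exactly once when the cycle is read cyclically, wrapping around the end - is easy to verify in a couple of lines of Python (variable names are ours):

```python
cycle = "00010111"
n = 3
# read the cycle cyclically by doubling the string and taking one
# length-n window from each of the 8 starting positions
triples = [(cycle * 2)[i:i+n] for i in range(len(cycle))]
```

The eight windows are exactly the eight possible binary triples, each occurring once.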
By drawing every possible edge (to represent every possible 3-mer), we assemble the complete de Bruijn graph (that is, the de Bruijn graph containing vertices for all possible 2-mers connected by edges representing every possible 3-mer in the given alphabet). The sequence de Bruijn gives in his paper is an Eulerian path through the complete (de Bruijn) graph (that is, a path through the de Bruijn graph that visits every edge exactly once - reading the sequence cyclically, so that the final triples 110 and 100 wrap around to the start):

Sequence: 00010111

00 -> 00 -> 01 -> 10 -> 01 -> 11 -> 11

# Back to DNA

Now the utility of the de Bruijn methodology is clearer: if we can come up with fast, efficient algorithms to find Eulerian paths on large graphs, we can transform the assembly problem (given fragments of a long sequence, reconstruct the sequence) into the problem of finding an Eulerian path, which is tractable even for large graphs. Compare this with string matching algorithms utilizing dynamic programming, which can cost $$O(N^2)$$ and make genome assembly computationally infeasible.

Part 1 (this post) has covered the basic idea behind assembling DNA sequences using de Bruijn graphs. In Part 2 of this post, we will move on to a discussion of the "real world" problem, warts and all, and how we can relax some of the strict assumptions (like assuming perfect coverage of the original sequence and assuming that all reads are perfect).

# The Git-Commit-Ectomy

Posted in Git

TLDR: Visit the git-commit-ectomy guide: http://pages.charlesreid1.com/git-commit-ectomy

Consider the following completely hypothetical scenario. Suppose you've been working for a while on your latest invention, a brand-new whiz-bang command line tool that's fast and solves an important problem, and you're chugging your way to the finish line. As part of preparing to release your software tool, you add some tests, because that's what you do. Those tests require some data, so you add a few test data sets, a few hundred kilobytes each, nothing fancy.
Then one day, the intern (who is just trying to be helpful by adding a new test) slips in a 70 MB test data set, and slips it in with a string of commits that somehow get incorporated into the master branch.

(Side note: you turned on branch protection to prevent this whole mess, didn't you? Didn't you?? 'Course you did. This is all just a hypothetical scenario.)

Now the situation is complicated: there are several open pull requests and active branches, and a non-trivial amount of history that's been added since the large test data set was accidentally committed. The intern apologizes profusely and promises to bring in donuts every day next week. But the damage is done.

The intern, a git novice, pulls out a laptop and runs a git rm on the files, pushing to the remote and happily, ignorantly believing the problem has been solved. But the intern does not understand how git works. Git has a perfect memory, and remembers every file in every commit. Since the problematic first commit that added the large files, git has remembered and will always remember that large file. It's in git's blood. It's what git was designed to do.

Once the intern has been, ahem, moved along, and branch protection has been turned on, it's time to find a git surgeon to perform a git-commit-ectomy to remove the problematic large files from the repository entirely.

## Dr. Reid's Patented Git-Commit-Ectomy

If it's a git-commit-ectomy you need, try Dr. Reid's Patented Git-Commit-Ectomy to ease your git commit pains. Whether you want to keep things simple and remove a git commit from a single branch, or you've got multiple branches, Dr. Reid's Patented Git-Commit-Ectomy will get you back on your feet.

Dr. Reid's Patented Git-Commit-Ectomy can handle even the most messy, confused, and tangled git commit history - with a bit of work and a gifted surgeon the git-commit-ectomy can smooth things out and get you feeling right as rain.
Visit the git-commit-ectomy guide: http://pages.charlesreid1.com/git-commit-ectomy
# Polynomials for which roots can be expressed as polynomials in a single root

Classical Galois theory gives necessary and sufficient conditions for the roots of a polynomial in $$k[x]$$ to be expressible in terms of nested radicals of the coefficients. Suppose instead that a single root $$\alpha$$ of $$p(x)\in \mathbb{Q}[x]$$ is known. Are there known necessary and sufficient conditions on $$p(x)$$ such that all remaining roots can be expressed as polynomial (or rational) functions of $$\alpha$$ and the coefficients of $$p(x)$$? For example, the cyclotomic polynomials have this property, since every primitive $$n^{\textrm{th}}$$ root of unity can be written as a power of some fixed root.

• This is only possible when $\alpha$ generates the splitting field, which means that the Galois group $G=\operatorname{Gal}(p)$ has order equal to $d=\deg(p)$. Furthermore, since the action of $G$ on the roots of $p$ is also transitive, it must be the cyclic group of order $d$. Conversely, if $G$ is (cyclic) of order $d$, then every root of $p$ is expressible as a polynomial in $\alpha$. – RP_ Mar 29 at 9:04

• The coefficients of $p$ are in $\mathbb{Q}$, so I guess you just want the roots to be polynomial functions in $\alpha$ with coefficients in $\mathbb{Q}$? Otherwise, can you clarify? Mar 29 at 9:08

• @RP_ The Galois group is not necessarily cyclic; if $K/\mathbb{Q}$ is any finite Galois extension then any $\alpha$ generating $K/\mathbb{Q}$ will work. We can also choose $\alpha$ so that its conjugates form a $\mathbb{Q}$-basis of $K$. Mar 29 at 9:14

• @FrançoisBrunault To be clear, if $p(x) = a_{n}x^{n} + \ldots + a_{0}$ then I'd like an expression for the other roots as polynomial functions in the $a_{i}$ and $\alpha$ with coefficients in the base field. Of course, for a fixed $p(x)$ the other roots would just be polynomial in $\alpha$ with coefficients in $\mathbb{Q}$. Mar 29 at 9:20

• Just a remark.
This is related to how Galois himself did Galois theory from the point of view of the theory of symmetric functions: looking at symmetric functions of all roots except one. See Theorem 6 in the paper "The fundamental theorem on symmetric polynomials: History's first whiff of Galois theory" by Blum-Smith and Coskey tandfonline.com/doi/abs/10.4169/college.math.j.48.1.18 Mar 29 at 18:30

Let $$\alpha=\alpha_1$$, $$\alpha_2$$, ..., $$\alpha_n$$ be the roots of $$p(x)$$. You want $$\mathbb{Q}(\alpha_1,\alpha_2, \ldots, \alpha_n) = \mathbb{Q}(\alpha)$$. If the Galois group is $$G \subseteq S_n$$, then $$\mathbb{Q}(\alpha_1)$$ corresponds to the stabilizer of $$1$$ in $$G$$, and $$\mathbb{Q}(\alpha_1,\alpha_2, \ldots, \alpha_n)$$ corresponds to the trivial subgroup. So the condition is that the stabilizer of $$1$$ in $$G$$ is trivial. In other words, the action of $$G$$ on the orbit of $$\alpha_1$$ should be regular. All of this was basically said in comments above, but there seemed to be some confusion about the case where $$p(x)$$ has multiple factors, so here is an answer which doesn't assume that $$p$$ is irreducible.

As per discussion in comments, let $$L/K$$ be a Galois extension with Galois group $$G$$; put $$N = |G|$$. Let $$\alpha \in L$$ be an element with trivial stabilizer. Let $$\beta$$ be another element of $$L$$. We want to write $$\beta$$ as a polynomial in $$\alpha$$ with coefficients in $$K$$. Set $$\gamma_j = \text{Tr}_{L/K}(\alpha^j \beta)$$. Then the $$\gamma_j$$ are in $$K$$. If $$K = \mathbb{Q}$$ and $$\alpha$$ and $$\beta$$ are algebraic integers, then the $$\gamma_j$$ are integers.
For any nonnegative integer $$j$$, we have $$\text{Tr}_{L/K}(\alpha^j \beta) = \sum_{\sigma \in G} \sigma(\alpha)^j \sigma(\beta).$$ If, for some magic reason, we explicitly have floating point values for the $$\sigma(\alpha)$$ and $$\sigma(\beta)$$, and know the $$G$$-action on these values, we can use this formula to numerically compute the $$\gamma_j$$; if the $$\gamma_j$$ are then integers, we can round our computations to the nearest integer and get the result. In practice, I'm not sure how you'd get the $$\gamma_j$$, but I'll pretend you know them.

Let $$A$$ be the $$N \times N$$ matrix with entries $$\sigma(\alpha)^j$$ for $$0 \leq j \leq N-1$$. Let $$\vec{b}$$ be the vector with entries $$\sigma(\beta)$$ and let $$\vec{c}$$ be the vector with entries $$\gamma_j$$. So the displayed equation above states that $$A \vec{b} = \vec{c}$$, and thus $$\vec{b} = A^{-1} \vec{c}$$. In particular, $$\beta$$ is the dot product of the first row of $$A^{-1}$$ with $$\vec{c}$$. The entries of $$\vec{c}$$ are in $$K$$, so it remains to show that the entries of the first row of $$A^{-1}$$ are in $$K(\alpha)$$.

Let the Galois orbit of $$\alpha$$ be $$\{ \alpha_1, \alpha_2, \ldots, \alpha_N \}$$ with $$\alpha = \alpha_1$$. Then $$A$$ is a Vandermonde matrix in the $$\alpha_i$$'s, so the entries of the first row of its inverse are $$\pm \frac{e_i(\alpha_2, \alpha_3, \ldots, \alpha_N)}{\prod_{j=2}^N (\alpha_1 - \alpha_j)}. \qquad (\ast)$$ Let $$p(x)$$ be the polynomial $$f(x)/(x-\alpha_1) = \prod_{j=2}^N (x-\alpha_j)$$, where $$f(x) = \prod_{j=1}^N (x-\alpha_j)$$. Then the coefficients of $$p$$ are clearly in $$K(\alpha_1)$$. The numerator $$e_i(\alpha_2, \alpha_3, \ldots, \alpha_N)$$ of $$(\ast)$$ is (up to sign) the coefficient of $$x^{N-i-1}$$ in $$p$$, and the denominator is $$p(\alpha_1) = f'(\alpha_1)$$. So $$(\ast)$$ is in $$K(\alpha_1)$$ and we are done. My memory is that I read that this was Galois's proof, but I couldn't find the source quickly.

• Yes! It is now clear to me that this is necessary and sufficient.
Is it possible to describe the other roots explicitly as polynomials in $\alpha_{1}$ under these assumptions? (Which of course is what I should have asked in the first place.) Mar 29 at 16:13

• So, at this point, I want to know in what form the data is given to me. If you give me a polynomial $p(x)$ of degree $N$, a subgroup $G$ of $S_N$ acting freely and transitively on the roots of $p$, another polynomial $q(x)$ and an action of $G$ on the roots of $q$, and you promise me that these actions extend to field automorphisms, I do know a way to get polynomial expressions for the roots of $q$ in terms of a root of $p$ from this data. But that is a pretty weird thing to promise me and not something you naturally get computationally. Mar 29 at 17:39

• I agree this is a weird set of demands, but in fact I think this is pretty close to my situation. In any case, I'd like to see how it is done! Mar 29 at 17:49

• Regarding the history, I looked at the article "Galois for 21st century readers" by Edwards. He explains how Galois proved that (in modern language) each root of $p(x)$ can be expressed rationally in terms of a generator of the splitting field (Lemma 3 in his first memoir). The construction looks a bit different than the one from your answer, but apparently, Galois also uses the important fact that every symmetric function of the other roots $\alpha', \alpha'',\ldots$ of $p(x)$ can be expressed rationally in terms of $\alpha$. Mar 30 at 7:01

From a computational point of view, one should not try to compute the Galois group. Assuming $$p(x) \in \mathbb{Q}[x]$$ is irreducible, and $$\alpha$$ is a root of $$p(x)$$, it is sufficient to factor $$p(x)$$ over the number field $$\mathbb{Q}(\alpha) = \mathbb{Q}[x]/(p(x))$$, and check whether all the irreducible factors have degree 1. In this way, you also get the expression of the roots in terms of $$\alpha$$.
This is much less expensive than computing the Galois group, which is feasible only in relatively low degree.

• I think this answer, and not mine, is the practical answer. Mar 29 at 19:41

• Yes! David's answer is wonderful, and I feel I learned something useful from it, but this is the way to get what I want computationally. Thank you both! Mar 29 at 20:57
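Both answers can be illustrated with sympy (a sketch with toy examples, not code from either poster; a serious computation would use a number field package such as Pari/GP): the cyclic cubic $$x^3 - 3x + 1$$ has the property in question, with $$\alpha \mapsto \alpha^2 - 2$$ cycling through the roots, and the factor-over-$$\mathbb{Q}(\alpha)$$ criterion can be checked by factoring over an algebraic extension:

```python
from sympy import Poly, factor_list, rem, sqrt, symbols

x, a = symbols("x a")

# Cyclic cubic x**3 - 3*x + 1 (Galois group Z/3Z): if a is a root, then
# a**2 - 2 is another root, i.e. p(a**2 - 2) ≡ 0 modulo p(a).
p = x**3 - 3*x + 1
assert rem(p.subs(x, a**2 - 2), p.subs(x, a), a) == 0

# The criterion from the last answer: factor p over Q(alpha) and check
# that every irreducible factor is linear.  Here q is the minimal
# polynomial of alpha = sqrt(2) + sqrt(3), and Q(sqrt(2), sqrt(3)) is
# the same field as Q(alpha).
q = x**4 - 10*x**2 + 1
_, factors = factor_list(q, extension=[sqrt(2), sqrt(3)])
assert all(Poly(f, x).degree() == 1 for f, _ in factors)
```

The `rem` call reduces $$p(\alpha^2-2)$$ modulo the minimal polynomial of $$\alpha$$, which is exactly the "polynomial in a single root" computation from the question.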
AB ☆ India, 2012-05-04 17:13 (3984 d 14:33 ago) Posting: # 8514 Views: 14,530

## SAS error in 3 way ref replicate study [RSABE / ABEL]

Dear all,

we have done a BE study in a 3-way reference-replicate design for the FDA and ended up with the following error while running the SAS code for unscaled 90% bioequivalence confidence intervals as given in the "Draft Guidance on Progesterone" for the parameter AUCI:

"NOTE: 20 observations are not included because of missing values.
WARNING: Did not converge."

These missing 20 observations are the ones (AUCI) which were not calculated in the respective treatment groups because the r2 values are less than 80%. However, when we re-run the data including these 20 missing AUCI (recalculated without considering the r2 criterion), there was no error. Please suggest what is the correct way to deal with this?

Regards, AB

jag009 ★★★ NJ, 2012-05-04 17:36 (3984 d 14:10 ago) @ AB Posting: # 8515 Views: 13,016

## SAS error in 3 way ref replicate study

Hi AB,

Sample size n?

John

AB ☆ India, 2012-05-05 08:45 (3983 d 23:01 ago) @ jag009 Posting: # 8516 Views: 12,937

## SAS error in 3 way ref replicate study

Hi Jag009,

❝ Sample size n?

Sample size is 60 in the study.

Regards, AB

d_labes ★★★ Berlin, Germany, 2012-05-06 12:10 (3982 d 19:36 ago) @ AB Posting: # 8518 Views: 13,273

## FDA's ABE code and partial replicate design

Dear AB,

❝ "NOTE: 20 observations are not included because of missing values.
❝ WARNING: Did not converge."
❝ These missing 20 observations are the ones (AUCI) which were not calculated in the respective treatment groups because the r2 values are less than 80%.
❝ However, when we re-run the data including these 20 missing AUCI (recalculated without considering the r2 criterion), there was no error.

Point 1: The FDA ABE code for replicate crossover studies using SAS Proc MIXED fits a model which is overspecified for data coming from a partial replicate design.
The intra-subject variability of the Test formulation is confounded with the subject-by-formulation interaction within that design and is not identifiable on its own. This may lead to convergence problems in the REML method (see this thread) or may lead to unreliable values of the intra-subject variances, especially for the Test formulation (see this thread). Your missing values seem to exacerbate the problem, but are not the source, IMHO.

Ways out? Don't know exactly. The logical way within the code - method - given in the progesterone guidance would be not to use the Proc MIXED code but the estimate plus its 90% confidence interval obtained from the evaluation of the intra-subject contrasts of T vs. R (step Intermediate analysis - ilat) as the measure of ABE. I'm always wondering why the Proc MIXED code was recommended if the evaluation via intra-subject contrasts did indicate that scaled ABE was not applicable. On the other hand, if applicable, the ABE criterion evaluated via intra-subject contrasts is part of the scaled ABE criterion. That's illogical to me.

Another possibility is to reduce the model in the FDA Proc MIXED code. Neglecting the subject-by-formulation interaction (setting it to zero) would lead to a somewhat better behaved model. You could achieve this by setting the covariance structure within

  RANDOM TRT/TYPE=FA0(2) SUB=SUBJ G;

to CS instead of FA0(2) or CSH.

Point 2: Not calculating the AUC(0-inf) values if the fit of the terminal part of the concentration-time curves had an r2 value less than 80% is at least statistically not very sound, not to say nonsense, IMHO - regardless of which study design is used. Use the search and you will find some (partly lengthy) discussions here in the Forum about that subject. See here and here for instance.

Regards, Detlew

Helmut ★★★ Vienna, Austria, 2012-05-06 15:29 (3982 d 16:17 ago) @ d_labes Posting: # 8519 Views: 13,361

## Arbitrary (and unjustified) cut-off of r²

Dear Detlew & AB!
❝ Not calculating the AUC(0-inf) values if the fit of the terminal part of the concentration-time curves had an r2 value less than 80% is at least statistically not very sound, not to say nonsense, IMHO.

Yes! Explained variance (sloppy: information) in regression (strongly!) depends on the sample size. See here and a rather lengthy thread of 2002 at David Bourne’s PKPD-list. Critical values of $$\small{r}$$ according to Odeh (1982)1 (and modified for $$\small{r^2}$$); one-sided, 5%:

$$\small{\begin{array}{rcc} n & r & r^2\\\hline 3 & 0.9877 & 0.9755\\ 4 & 0.9000 & 0.8100\\ 5 & 0.805\phantom{0} & 0.6486\\ 6 & 0.729\phantom{0} & 0.5319\\ 7 & 0.669\phantom{0} & 0.4481\\ 8 & 0.621\phantom{0} & 0.3863\\ 9 & 0.582\phantom{0} & 0.3390\\ 10 & 0.549\phantom{0} & 0.3018\\ 11 & 0.521\phantom{0} & 0.2719\\ 12 & 0.497\phantom{0} & 0.2473\\ 13 & 0.476\phantom{0} & 0.2267\\ 14 & 0.457\phantom{0} & 0.2093\\ 15 & 0.441\phantom{0} & 0.1944\\\hline \end{array}}$$

In other words, an $$\small{r^2}$$ of 0.6486 from five data points denotes the same ‘quality of fit’ as an $$\small{r^2}$$ of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil. Maybe there is some copy-pasting going on? If you really want to use a cut-off (which I don’t recommend and which is not required in any GL), take the number of data points into account. I strongly suggest revising your SOP.

BTW, visual inspection of fits is mandatory (see there with references). Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2 All data sets: $$\small{\bar{x}=9.0,\,s_x^2=11,\,\bar{y}=7.5,\,s_y^2=4.1\rightarrow\widehat{y}=3+0.5\cdot x,\,R_{yx}^2=0.82\ldots}$$

1. Odeh RE. Critical values of the sample product-moment correlation coefficient in the bivariate distribution. Commun Statist–Simula Computa. 1982; 11(1): 1–26. doi:10.1080/03610918208812243.

2. Anscombe FJ. Graphs in statistical analysis. Am Stat. 1973; 27: 17–21.
doi:10.2307/2682899.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes

d_labes ★★★ Berlin, Germany, 2012-05-07 10:48 (3981 d 20:58 ago) @ Helmut Posting: # 8520 Views: 12,887

## Anscombe quartet

Dear Helmut!

❝ BTW, visual inspection of fits is mandatory (see here with references). Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Very nice illustration. Thanks for that; I was not aware of it up to now, although it was invented already in 1973 (if Wikipedia is correct).

Regards, Detlew

Helmut ★★★ Vienna, Austria, 2012-05-07 13:15 (3981 d 18:31 ago) @ d_labes Posting: # 8522 Views: 13,092

## Anscombe quartet in R

Dear Detlew!

❝ Very nice illustration.

I hijacked the code from Wikimedia Commons and learned that the data are available in R’s standard installation. Give it a try: ?anscombe

AB ☆ India, 2012-05-07 14:07 (3981 d 17:39 ago) @ Helmut Posting: # 8523 Views: 12,940

## Anscombe quartet in R

Dear Detlew & HS,

Many thanks for your detailed insight.

Regards, AB

FI ☆ Austria, 2012-10-08 12:39 (3827 d 19:07 ago) @ Helmut Posting: # 9332 Views: 12,582

## Arbitrary (and unjustified) cut-off of r²

Dear Helmut,

❝ In other words, an $$\small{r^2}$$ of 0.6486 from five data points denotes the same ‘quality of fit’ as an $$\small{r^2}$$ of 0.9755 from three. Searching the forum I get the impression that you (AB) are not alone with a cut-off of 0.80. Justification: nil.

❝ BTW, visual inspection of fits is mandatory... Don’t trust in numbers alone. A classical example is Anscombe’s quartet.2

Small add-on: the adj. r² method could be misleading towards "the more (data points) the better"!
Considering the PK of Azithromycin, "the less the better" could be considered, because there are (at least?) 3 elimination phases for Azi (uptake into white blood cells, rapid distribution into tissue... which would resemble Anscombe2), and a very long t1/2, depending (!) on the timepoints used for calculation. As the terminal elimination needs to be calculated, and the adj. r² method from a previous study mostly took 3 to 5 points, but sometimes also 12 (!), should the timepoints be limited (to 3 or 4), to reflect the PK? What if one concentration looks to be an analytical mistake (?) that confounds t1/2 in such a way that the slope increases...? Where to put the cut-off for adj. r²?

FI

Helmut ★★★ Vienna, Austria, 2012-10-08 15:45 (3827 d 16:01 ago) @ FI Posting: # 9334 Views: 12,553

## Predominant half life; exclusions

Servus Franz!

❝ Considering the PK of Azithromycin, "the less the better" could be considered, because there are (at least?) 3 elimination phases […]. As the terminal elimination needs to be calculated, and the adj. r² method from a previous study mostly took 3 to 5 points, but sometimes also 12 (!), should the timepoints be limited (to 3 or 4), to reflect the PK?

Multiphasic PK can be problematic, especially if volumes of distribution are variable. I once had to deal with a drug (3 phases) where the terminal half life was ~3 days and the volume of distribution of the deep compartment was very large. Was this phase important? No. Running a PopPK model it turned out that this compartment accounted for <1% of the AUC. In my case the V2 showed little variability, but if we have large variability the predominant half life might be the second phase in some subjects and the third in others… See also Boxenbaum & Battle (1995).*

❝ What if one concentration looks to be an analytical mistake (?) that confounds t1/2 in such a way that the slope increases...?
Since according to the EMA’s bioanalytical GL a blind plausibility review of data leading to confirmation/rejection (aka “pharmacokinetic repeat”) is not acceptable any more – bad luck. If other countries are concerned: have an SOP in place, repeat the analysis, and cross your fingers. Other options (?):

• Exclude the suspect value from the estimation of λz. Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin’s terminology: AUCinfpred instead of AUCinfobs).

• Exclude the subject from the comparison of AUC, but keep him/her for the comparison of Cmax. Since the latter is more variable and the study should be powered to show BE for the most variable metric, it should not hurt too much.

• If you expect problems beforehand – and have an IR formulation – go with AUC72 instead. No more hassle with extrapolations.

Of course everything should be covered by SOPs and – preferably – described in the protocol as well.

❝ Where to put the cut-off for adj. r²?

Nowhere. Forget it. Doesn’t make any sense, IMHO. For a bad example see this thread. I would be happy to see a publication justifying an algorithm which would allow automatic selection of the terminal phase in multicompartment PK. If anybody knows a single one, please let me know. See also Ref.#2 at the end of this thread.

d_labes ★★★ Berlin, Germany, 2012-10-09 11:33 (3826 d 20:13 ago) @ Helmut Posting: # 9352 Views: 12,359

## Excluding time points for lambdaZ

Dear Helmut, dear FI!

❝ ❝ What if one concentration looks to be an analytical mistake (?) that confounds t1/2 in such a way that the slope increases...?
❝ ...
❝ Other options (?):
❝ • Exclude the suspect value from the estimation of λz.
❝ Keep it in the calculation of AUCt and base the calculation of AUC not on Ct, but on its estimate (in Phoenix/WinNonlin’s terminology: AUCinfpred instead of AUCinfobs).
❝ • ...

In this post I had claimed "I never have seen deficiency questions concerning the fit of the terminal phase of concentration time courses in my ~30 years career. Even if the 'fit' was done with only 2 points". Never say never. Quite recently I got:

… The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 …

(It was a replicate BE study in patients.) The questioned cases had in common that the last measured concentration was increasing compared to the preceding ones and didn't fit into the linear part for the log-linear regression. To not grossly overestimate the terminal half-life in such situations it is my standard operating procedure to act according to Helmut's first option above. Seems some regulators' opinions do not match mine.

Regards, Detlew

Helmut ★★★ Vienna, Austria, 2012-10-09 16:09 (3826 d 15:37 ago) @ d_labes Posting: # 9356 Views: 12,591

## Analytical variability

Dear Detlew!

❝ In this post I had claimed "I never have seen deficiency questions concerning the fit of the terminal phase of concentration time courses in my ~30 years career. Even if the 'fit' was done with only 2 points".

I remember this post very well. So far I received only one request myself (by the sponsor, not an agency) why I had selected specific time points and not others. I answered by “visual inspection of the fit” (aka eye-ball PK) as recommended in the literature. BTW, I don’t know a single reference suggesting a maximum R²adj or a statement about exclusion.

❝ Never say never.

❝ Quite recently I got: … The applicant should justify the calculation of the terminal rate constant for patient #xxx, reference/r.1, patient #yyy, test/r.2 … […].
❝ The questioned cases had in common that the last measured concentration was increasing compared to the preceding ones and didn't fit into the linear part for the log-linear regression. To not grossly overestimate the terminal half-life in such situations it is my standard operating procedure to act according to Helmut's first option above.

Oh no! What will you answer? Maybe this post helps. Seems that (some) regulators are not aware of the consequences of (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone.

As a finger exercise we once solved the confidence bands of weighted inverse regression (aka calibration) – which required some nasty algebra (partial derivatives, etc.).* Then you can come up not only with estimated concentrations but also their confidence intervals (asymmetric – since the confidence bands of a linear function are two hyperbolas). If the CIs of two concentrations overlap, they are not significantly different…

• If one doesn’t have the knowledge and stamina to step into the algebra, here is an alternative: calculate the CI of the regression (available in all statistical packages). To get the lower CL of the estimated concentration (x0|y0) use the bisection algo with starting values of xlow=0 and xhi=x0 and the estimated upper CL at these values. To get the upper CL run the algo between x0 and the ULOQ.

d_labes ★★★ Berlin, Germany, 2012-10-09 17:15 (3826 d 14:31 ago) @ Helmut Posting: # 9357 Views: 12,450

Dear Helmut!

❝ Oh no! What will you answer? Maybe this post helps. Seems that (some) regulators are not aware of the consequences of (acceptable!) limitations of analytical methods (20% inaccuracy, 20% imprecision at the LLOQ) and take results as set in stone. ...
My answer is somefink like this (maybe it helps others):

The concentration time points selected for calculation of the terminal rate constant (lambdaZ) were chosen by visual inspection, as recommended in the literature. The last concentration of the questioned terminal rate constant calculations does not fit the linear part of the concentration-time curves (in the log-linear plot). This behaviour is not uncommon if the concentrations are in the range of the LLOQ. To not grossly overestimate the terminal half-life in such cases it is recommended to leave out such measurement points. See for instance the book Hauschke, Steinijans, Pigeot "Bioequivalence Studies in Drug Development", Wiley, Chichester (2007), Chapter 2, “Metrics to characterize concentration-time profiles in single- and multiple-dose bioequivalence studies”, especially Fig. 2.3.

On the other hand, the reliable estimation of the terminal rate constant, which is needed for a reliable estimate of AUC(0-inf), has the one and only purpose of assuring that the observation time is long enough to ensure that AUC(0-tlast) covers at least 80% of AUC(0-inf) (or, the other way round, that the residual area AUC(tlast-inf) is <=20% of AUC(0-inf)). AUC(0-inf) in itself is not a primary endpoint on which the bioequivalence decision will be based (not according to the EMA guidance and also not according to the Study protocol). The bioequivalence decision taken in the Study Report is thus in no way affected by the way of calculation of the terminal rate constant.

Regards, Detlew

Helmut ★★★ Vienna, Austria, 2012-10-09 21:43 (3826 d 10:03 ago) @ d_labes Posting: # 9362 Views: 12,230

## Well done!

Dear Detlew, good answer!
❝ On the other hand, the reliable estimation of the terminal rate constant, which is needed for a reliable estimate of AUC(0-inf), has the one and only purpose of assuring that the observation time is long enough to ensure that AUC(0-tlast) covers at least 80% of AUC(0-inf) (or, the other way round, that the residual area AUC(tlast-inf) is <=20% of AUC(0-inf)).

❝ AUC(0-inf) in itself is not a primary endpoint on which the bioequivalence decision will be based (not according to the EMA guidance and also not according to the Study protocol). The bioequivalence decision taken in the Study Report is thus in no way affected by the way of calculation of the terminal rate constant.

Agree, but some hard-core assessor might tell you that if you cannot reliably estimate AUC(0-inf) you have not demonstrated that AUCt is an acceptable metric for the BE assessment. That’s a vicious circle. BTW, I didn’t want to go too much off-topic (as I did above) and started another thread about analytical variability.
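As a footnote to the r²-cut-off discussion earlier in the thread: the critical values Helmut tabulated can be reproduced from the t-distribution via $$r_{crit} = t / \sqrt{df + t^2}$$ with $$df = n - 2$$ (a Python sketch using scipy, although the forum examples use R):

```python
from scipy.stats import t

def critical_r(n, alpha=0.05):
    """One-sided critical value of the correlation coefficient for n
    data points: r is 'significant' at level alpha only above this."""
    df = n - 2
    tcrit = t.ppf(1 - alpha, df)
    return tcrit / (df + tcrit**2) ** 0.5

for n in (3, 5, 10, 15):
    r = critical_r(n)
    print(f"n={n:2d}  r={r:.4f}  r^2={r*r:.4f}")
```

The printed values agree with Odeh's table above (0.9877 for n = 3, 0.805 for n = 5, 0.441 for n = 15), which is the whole point: a fixed r² cut-off ignores the sample size.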
# Introduction

Clustering is an unsupervised machine learning technique that groups entities of a dataset with a high degree of similarity into the same cluster. Nowadays many areas use these kinds of algorithms to separate datasets into groups in an automated way while still obtaining a good quality result. The clustering process is not universal: for some groups of datasets the kind of metric used is most relevant, while for others the entities that represent each cluster are more interesting. Just as there are many groups of datasets, there are many clustering algorithms; each one tries to take advantage of a particular data type, so each is better suited to a specific kind of data.

This section will explain a little more about the Partitioning Around Medoids (PAM) algorithm, showing how the algorithm works, what its parameters are and what they mean, an example of a dataset, how to execute the algorithm, and the result of that execution with the dataset as input.

# The Partitioning Around Medoids (PAM) Algorithm

## Algorithm

The PAM algorithm was developed by Leonard Kaufman and Peter J. Rousseeuw. It is very similar to K-means, mostly because both are partitional algorithms; in other words, both break the dataset into groups (clusters), and both work by trying to minimize an error criterion. However, PAM works with medoids, which are entities of the dataset that represent the group in which they are inserted, while K-means works with centroids, which are artificially created entities that represent their clusters.

The PAM algorithm partitions a dataset of n objects into k clusters, where both the dataset and the number k are inputs of the algorithm. The algorithm works with a matrix of dissimilarities, and its goal is to minimize the overall dissimilarity between the representative of each cluster and the cluster's members.
The algorithm uses the following model to solve the problem:

$$\text{minimize } F = \sum_{i=1}^{n}\sum_{j=1}^{n} d(i,j)\, z_{ij}$$

subject to:

$$\sum_{i=1}^{n} z_{ij} = 1, \qquad j = 1, 2, \ldots, n \qquad (1)$$

$$z_{ij} \leq y_{i}, \qquad i, j = 1, 2, \ldots, n \qquad (2)$$

$$\sum_{i=1}^{n} y_{i} = k, \qquad k = \text{number of clusters} \qquad (3)$$

$$y_{i}, z_{ij} \in \{0, 1\}, \qquad i, j = 1, 2, \ldots, n \qquad (4)$$

Here $$F$$ is the objective function to minimize, $$d(i,j)$$ is the dissimilarity between entities i and j, and $$z_{ij}$$ indicates whether entity j is assigned to the cluster represented by medoid i, so that only dissimilarities between entities and their own medoid are counted in the objective. The constraints have the following functions: (1) ensures that every single entity is assigned to one cluster and only one cluster, (2) ensures that entities can only be assigned to entities selected as medoids ($$y_{i} = 1$$), (3) ensures that there are exactly k clusters, and (4) lets the decision variables assume only the values 0 or 1.

The PAM algorithm can work over two kinds of input: the first is the matrix representing every entity and the values of its variables, and the second is the dissimilarity matrix directly; in the latter case the user provides the dissimilarities as input to the algorithm instead of the data matrix representing the entities. Either way the algorithm reaches a solution to the problem. In a general analysis the algorithm proceeds this way:

Build phase:

1. Choose k entities to become the medoids, or, in case these entities were provided, use them as the medoids;
2. Calculate the dissimilarity matrix if it was not provided;
3. Assign every entity to its closest medoid;

Swap phase:

4. For each cluster, search whether any of the entities of the cluster lowers the average dissimilarity coefficient; if it does, select the entity that lowers this coefficient the most as the medoid for this cluster;
5. If at least one medoid has changed, go to (3); else end the algorithm.
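The build and swap phases above can be sketched in Python operating directly on a dissimilarity matrix (an illustrative sketch, not the reference implementation from R's cluster package; the function and variable names are mine):

```python
import numpy as np

def pam(D, k, max_iter=100):
    """PAM sketch on a precomputed n x n dissimilarity matrix D."""
    n = D.shape[0]

    # Build phase: start from the most central entity, then greedily add
    # the entity that most reduces the total dissimilarity.
    medoids = [int(np.argmin(D.sum(axis=1)))]
    while len(medoids) < k:
        current = D[:, medoids].min(axis=1)
        gains = [np.maximum(current - D[:, j], 0).sum()
                 if j not in medoids else -1.0 for j in range(n)]
        medoids.append(int(np.argmax(gains)))

    # Swap phase: try every (medoid, non-medoid) exchange and keep the
    # one that lowers the total dissimilarity the most; stop when no
    # swap improves it.
    for _ in range(max_iter):
        best_cost = D[:, medoids].min(axis=1).sum()
        best_swap = None
        for mi in range(k):
            for j in range(n):
                if j in medoids:
                    continue
                trial = medoids[:mi] + [j] + medoids[mi + 1:]
                cost = D[:, trial].min(axis=1).sum()
                if cost < best_cost:
                    best_swap, best_cost = (mi, j), cost
        if best_swap is None:
            break
        medoids[best_swap[0]] = best_swap[1]

    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

# Toy example: two well-separated one-dimensional groups.
pts = np.array([0.0, 1.0, 2.0, 10.0, 11.0, 12.0])
D = np.abs(pts[:, None] - pts[None, :])
medoids, labels = pam(D, 2)
```

On this toy dissimilarity matrix the medoids come out as the central point of each group (indices 1 and 4), unlike K-means centroids, which need not coincide with any entity of the dataset.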
## Implementation

Note: the pseudocode below describes the simplified k-medoids iteration (alternating assignment and medoid update) rather than the original PAM procedure with its characteristic BUILD and SWAP phases; it captures the same idea of repeatedly reassigning entities and reselecting medoids.

Algorithm 1: PAM-style k-medoids
Input:
  E = {e1, e2, ..., en} (dataset to be clustered, or matrix of dissimilarities)
  k (number of clusters)
  metric (metric to use when computing the dissimilarity matrix)
  diss (flag indicating whether E is a matrix of dissimilarities)
Output:
  M = {m1, m2, ..., mk} (vector of cluster medoids)
  L = {l(e) | e = 1, 2, ..., n} (set of cluster labels of E)

foreach mi ∈ M do
    mi ← ej ∈ E; (e.g. random selection)
end
if diss ≠ true
    Dissimilarity ← CalculateDissimilarityMatrix(E, metric);
else
    Dissimilarity ← E;
end
repeat
    foreach ei ∈ E do
        l(ei) ← argmin over m ∈ M of Dissimilarity(ei, m);
    end
    changed ← false;
    Mtmp ← SelectBestClusterMedoids(E, Dissimilarity, L);
    if Mtmp ≠ M
        M ← Mtmp;
        changed ← true;
    end
until changed = false;

In the R programming language, the PAM algorithm is available in the cluster package and can be called with the following command:

pam(x, k, diss, metric, medoids, stand, cluster.only, do.swap, keep.diss, keep.data, trace.lev)

Where the parameters are:

x: numerical data matrix representing the dataset entities, or a dissimilarity matrix, depending on the value of the diss parameter. If x is a data matrix, each row is an entity and each column is a variable; in this case missing values are allowed as long as every pair of entities has at least one variable that is not missing for both. If x is a dissimilarity matrix, missing values are not allowed.

k: number of clusters into which the dataset will be partitioned, where 0 < k < n and n is the number of entities.

diss: logical flag; if TRUE, x is treated as a dissimilarity matrix, and if FALSE, x is treated as a data matrix.
metric: a string specifying which of the two supported metrics is used to calculate the dissimilarity matrix; it can be "euclidean" for the Euclidean distance or "manhattan" for the Manhattan distance.

medoids: either NULL, in which case the algorithm chooses the initial medoids itself, or a vector of k entity indices to use as the initial medoids.

stand: logical flag; if TRUE, the measurements in x are standardized before the dissimilarities are calculated. Measurements are standardized column by column, by subtracting the column's mean value and dividing by the column's mean absolute deviation. If x is a dissimilarity matrix, this parameter is ignored.

cluster.only: logical flag; if TRUE, only the clustering is computed and returned.

do.swap: logical flag indicating whether the swap phase should happen (TRUE) or not (FALSE).

keep.diss: logical flag indicating whether the dissimilarities should (TRUE) or should not (FALSE) be kept in the result.

keep.data: logical flag indicating whether the input data x should (TRUE) or should not (FALSE) be kept in the result.

trace.lev: a numeric parameter specifying a trace level for printing diagnostics during the build and swap phases of the algorithm. The default, 0, prints nothing.

The PAM algorithm returns a pam object containing information about the result of the execution.

## Visualization

In R there are two ways of inspecting the result of the PAM algorithm: printing the object that the algorithm returns, or plotting the data from the object to create a graphic of the result. The printed form is somewhat harder to read, but it gives more complete and precise information; the plot is much easier to understand, gives the user a better overview, and lets further information of interest be added.
To view the result of the execution of the PAM algorithm in textual form there are two commands: one that prints summarized information about the object, and one that prints more complete information. Of the two commands listed below, the first prints the summarized form and the second the complete form.

print(result)
summary(result)

The other way of visualizing the result of the execution is graphically, using the following command:

plot(result)

Example: To show an example of use of the algorithm and a result of its execution, a simple dataset with few entities and few dimensions was used, as shown in the following table:

Table 1: Simple dataset

Object  Attribute x  Attribute y
1       1            1
2       2            3
3       1            2
4       2            2
5       10           4
6       11           5
7       10           6
8       12           5
9       11           6

As we can see, the data separates into two clusters, so we will use k = 2. The PAM algorithm can be executed as follows:

#load the table from a file
#execute the pam algorithm with the dataset created for the example
result <- pam(x, 2, FALSE, "euclidean")
#print the results data on the screen
summary(result)
#plot a graphic showing the clusters and the medoids of each cluster
plot(result$data, col = result$clustering)
points(result$medoids, col = 1:2, pch = 4)

Printing the result from the execution gives:

Medoids:
  ID  x y
4  4  2 2
6  6 11 5
Clustering vector:
1 2 3 4 5 6 7 8 9
1 1 1 1 2 2 2 2 2
Objective function:
   build     swap
1.255618 0.915849
Numerical information per cluster:
     size max_diss   av_diss diameter separation
[1,]    4 1.414214 0.8535534 2.236068   8.062258
[2,]    5 1.414214 0.9656854 2.236068   8.062258
Isolated clusters:
L-clusters: character(0)
L*-clusters: [1] 1 2
Silhouette plot information:
   cluster neighbor sil_width
3        1        2 0.8898942
4        1        2 0.8788422
1        1        2 0.8549629
2        1        2 0.8297000
6        2        1 0.8790384
9        2        1 0.8631441
8        2        1 0.8425790
7        2        1 0.8232848
5        2        1 0.7747713
Average
silhouette width per cluster:
[1] 0.8633498 0.8365635
Average silhouette width of total data set:
[1] 0.8484685
36 dissimilarities, summarized :
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 1.0000  1.4142  8.3951  6.1559  9.9362 11.7050
Metric : euclidean
Number of objects : 9
Available components:
 [1] "medoids" "id.med" "clustering" "objective" "isolation" "clusinfo" "silinfo" "diss" "call"
[10] "data"

Plotting the result produces a scatter plot of the dataset with the points colored by cluster and the two medoids marked.

## Case Study

In this section we will see a case study using PAM.

### Scenario

In this case study, part of the iris database available in the R package datasets was used. This famous (Fisher's or Anderson's) iris data set gives the measurements, in centimeters, of the variables sepal length and width and petal length and width, respectively, for 50 flowers from each of 3 species of iris. The species are Iris setosa, versicolor, and virginica. Given this data, it is natural to ask whether the flowers of each of the three iris species really are more similar to the others of the same species, so in this case study the length and width of both petal and sepal are used to cluster the dataset into 3 groups, and the clusters are then checked against the flower species. The dataset used in this case study consists of the following columns:

• Flower: an id of the flower;
• Sepal.Length: a numeric value, the length of the sepal in centimeters;
• Sepal.Width: a numeric value, the width of the sepal in centimeters;
• Petal.Length: a numeric value, the length of the petal in centimeters;
• Petal.Width: a numeric value, the width of the petal in centimeters;
• Species: a text value identifying the species of the flower.

### Input Data

The input data is a table consisting of 50% (75 entities) of the original iris dataset, which has 150 flowers with 5 attributes each.
So the dataset used in this case study is represented by the following table: Table 2: Sample from iris dataset Flower Sepal.Length Sepal.Width Petal.Length Petal.Width Species 1 5.1 3.5 1.4 0.2 setosa 2 4.9 3.0 1.4 0.2 setosa 3 4.7 3.2 1.3 0.2 setosa 4 4.6 3.1 1.5 0.2 setosa 5 5.0 3.6 1.4 0.2 setosa 6 5.4 3.9 1.7 0.4 setosa 7 4.6 3.4 1.4 0.3 setosa 8 5.0 3.4 1.5 0.2 setosa 9 4.4 2.9 1.4 0.2 setosa 10 4.9 3.1 1.5 0.1 setosa 11 5.4 3.7 1.5 0.2 setosa 12 4.8 3.4 1.6 0.2 setosa 13 4.8 3.0 1.4 0.1 setosa 14 4.3 3.0 1.1 0.1 setosa 15 5.8 4.0 1.2 0.2 setosa 16 5.7 4.4 1.5 0.4 setosa 17 5.4 3.9 1.3 0.4 setosa 18 5.1 3.5 1.4 0.3 setosa 19 5.7 3.8 1.7 0.3 setosa 20 5.1 3.8 1.5 0.3 setosa 21 5.4 3.4 1.7 0.2 setosa 22 5.1 3.7 1.5 0.4 setosa 23 4.6 3.6 1.0 0.2 setosa 24 5.1 3.3 1.7 0.5 setosa 25 4.8 3.4 1.9 0.2 setosa 51 7.0 3.2 4.7 1.4 versicolor 52 6.4 3.2 4.5 1.5 versicolor 53 6.9 3.1 4.9 1.5 versicolor 54 5.5 2.3 4.0 1.3 versicolor 55 6.5 2.8 4.6 1.5 versicolor 56 5.7 2.8 4.5 1.3 versicolor 57 6.3 3.3 4.7 1.6 versicolor 58 4.9 2.4 3.3 1.0 versicolor 59 6.6 2.9 4.6 1.3 versicolor 60 5.2 2.7 3.9 1.4 versicolor 61 5.0 2.0 3.5 1.0 versicolor 62 5.9 3.0 4.2 1.5 versicolor 63 6.0 2.2 4.0 1.0 versicolor 64 6.1 2.9 4.7 1.4 versicolor 65 5.6 2.9 3.6 1.3 versicolor 66 6.7 3.1 4.4 1.4 versicolor 67 5.6 3.0 4.5 1.5 versicolor 68 5.8 2.7 4.1 1.0 versicolor 69 6.2 2.2 4.5 1.5 versicolor 70 5.6 2.5 3.9 1.1 versicolor 71 5.9 3.2 4.8 1.8 versicolor 72 6.1 2.8 4.0 1.3 versicolor 73 6.3 2.5 4.9 1.5 versicolor 74 6.1 2.8 4.7 1.2 versicolor 75 6.4 2.9 4.3 1.3 versicolor 101 6.3 3.3 6.0 2.5 virginica 102 5.8 2.7 5.1 1.9 virginica 103 7.1 3.0 5.9 2.1 virginica 104 6.3 2.9 5.6 1.8 virginica 105 6.5 3.0 5.8 2.2 virginica 106 7.6 3.0 6.6 2.1 virginica 107 4.9 2.5 4.5 1.7 virginica 108 7.3 2.9 6.3 1.8 virginica 109 6.7 2.5 5.8 1.8 virginica 110 7.2 3.6 6.1 2.5 virginica 111 6.5 3.2 5.1 2.0 virginica 112 6.4 2.7 5.3 1.9 virginica 113 6.8 3.0 5.5 2.1 virginica 114 5.7 2.5 5.0 2.0 virginica 115 5.8 2.8 
5.1 2.4 virginica 116 6.4 3.2 5.3 2.3 virginica 117 6.5 3.0 5.5 1.8 virginica 118 7.7 3.8 6.7 2.2 virginica 119 7.7 2.6 6.9 2.3 virginica 120 6.0 2.2 5.0 1.5 virginica 121 6.9 3.2 5.7 2.3 virginica 122 5.6 2.8 4.9 2.0 virginica 123 7.7 2.8 6.7 2.0 virginica 124 6.3 2.7 4.9 1.8 virginica 125 6.7 3.3 5.7 2.1 virginica

### Execution

The process was done as follows:

#import data
data <- read.table("sampleiris.txt")
#execution
result <- pam(data[1:4], 3, FALSE, "euclidean")
#print results
summary(result)
#plot clusters
plot(data, col = result$clustering)
#add the medoids to the plot
points(result$medoids, col = 1:3, pch = 4)

### Output

The following data was printed as the result of the execution:

Medoids:
    ID Sepal.Length Sepal.Width Petal.Length Petal.Width
8    8          5.0         3.4          1.5         0.2
64  39          6.1         2.9          4.7         1.4
103 53          7.1         3.0          5.9         2.1
Clustering vector:
  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 51 52 53 54 55 56
  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  1  2  2  2  2  2  2
 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 101 102 103 104 105 106 107 108 109 110 111 112
  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2  2   3   2   3   3   3   3   2   3   3   3   2   2
113 114 115 116 117 118 119 120 121 122 123 124 125
  3   2   2   3   3   3   3   2   3   2   3   2   3
Objective function:
    build      swap
0.7148339 0.6990539
Numerical information per cluster:
     size max_diss   av_diss diameter separation
[1,]   25 1.236932 0.5137400 2.042058  1.9000000
[2,]   34 1.951922 0.8085343 2.727636  0.3741657
[3,]   16 1.284523 0.7559609 2.147091  0.3741657
Isolated clusters:
L-clusters: [1] 1
L*-clusters: character(0)
Silhouette plot information: cluster neighbor sil_width 1 1 2 0.84941732 5 1 2 0.84830238 8 1 2 0.84812593 18 1 2 0.84784555 12 1 2 0.83221128 22 1 2 0.82890349 20 1 2 0.82456328 3 1 2 0.82337894 7 1 2 0.81910409 10 1 2 0.81662688 11 1 2 0.80769429 2 1 2 0.80592613 13 1 2 0.80278163 4 1 2 0.79810574 23 1 2 0.79482977 24 1 2 0.78999596 17 1 2 0.78539723 21 1 2 0.78454015 25 1 2 0.77452963 6 1 2 0.75995941 9 1 2 0.74605493 14 1 2
0.74277337 19 1 2 0.72082914 15 1 2 0.71581750 16 1 2 0.66155611 68 2 3 0.60036142 56 2 3 0.59753885 62 2 3 0.59698924 72 2 3 0.59691421 70 2 3 0.59514179 54 2 3 0.58507022 67 2 3 0.56989428 60 2 1 0.56350914 63 2 3 0.55592514 75 2 3 0.54720666 74 2 3 0.53971473 64 2 3 0.53757677 69 2 3 0.51098390 65 2 1 0.50762488 107 2 3 0.48295375 55 2 3 0.46851074 52 2 3 0.46827948 59 2 3 0.44164146 66 2 3 0.42147865 71 2 3 0.41421605 73 2 3 0.41282512 122 2 3 0.40891392 120 2 3 0.40207904 57 2 3 0.39510378 114 2 3 0.37176468 124 2 3 0.34854822 102 2 3 0.33532624 61 2 1 0.32662688 58 2 1 0.20142024 51 2 3 0.19024422 115 2 3 0.16320750 53 2 3 0.11554863 112 2 3 -0.07433144 111 2 3 -0.07748205 103 3 2 0.59622203 106 3 2 0.59241159 108 3 2 0.58027197 110 3 2 0.56716967 123 3 2 0.56182697 121 3 2 0.55568135 119 3 2 0.53242285 118 3 2 0.52551154 125 3 2 0.51206488 105 3 2 0.49243542 101 3 2 0.45749953 113 3 2 0.44409513 109 3 2 0.37181492 117 3 2 0.26375026 116 3 2 0.21777715 104 3 2 0.21412781
Average silhouette width per cluster:
[1] 0.7931708 0.4153331 0.4678177
Average silhouette width of total data set:
[1] 0.5524757
2775 dissimilarities, summarized :
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
 0.1000  1.1136  2.5080  2.6329  3.9006  7.0852
Metric : euclidean
Number of objects : 75
Available components:
 [1] "medoids" "id.med" "clustering" "objective" "isolation" "clusinfo" "silinfo" "diss" "call"
[10] "data"

A plot of the clustered data, with one color per cluster and the medoids marked, was generated as well.

### Analysis

After executing the algorithm and analyzing the output, we can see that the clusters correspond well to the species of the flowers. The data contained 75 elements in total: 25 from the Setosa species, 25 from Versicolor, and 25 from Virginica. The algorithm placed all of the Setosa and Versicolor elements in clusters 1 and 2, respectively, while the Virginica elements mostly went to cluster 3, with some assigned to cluster 2.
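The quality of the clustering can be tallied directly from the clustering vector in the output: all 25 Setosa rows map to cluster 1, all 25 Versicolor rows to cluster 2, and 9 of the 25 Virginica rows land in cluster 2 instead of cluster 3. A small Python sketch of the count (the labels below are transcribed from the output above):

```python
# Cluster labels from the pam output, in row order:
# 25 setosa rows, 25 versicolor rows, 25 virginica rows (rows 101-125).
setosa     = [1] * 25
versicolor = [2] * 25
virginica  = [3, 2, 3, 3, 3, 3, 2, 3, 3, 3, 2, 2, 3,
              2, 2, 3, 3, 3, 3, 2, 3, 2, 3, 2, 3]

# An element is "correct" when its cluster number matches its species.
correct = setosa.count(1) + versicolor.count(2) + virginica.count(3)
total = len(setosa) + len(versicolor) + len(virginica)
error_rate = 1 - correct / total

print(correct)               # → 66
print(round(error_rate, 2))  # → 0.12
```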
After verifying the results we find that, out of 75 elements, 66 were correctly clustered, an error rate of 12%, which is a very good result.

# References

1. The R Development Core Team, R: A Language and Environment for Statistical Computing.
2. Kaufman, L., Rousseeuw, P. J., Clustering by Means of Medoids.
# I'm so behind in physics, Circular Rotation help.

by GRice40 Tags: circular, physics, rotation

P: 20 1. The problem statement, all variables and given/known data: The "Giant Swing" at a county fair consists of a vertical central shaft with a number of horizontal arms attached at its upper end. Each arm supports a seat suspended from a 5.00 m long cable, the upper end of which is fastened to the arm at a point 3.00 m from the central shaft. Find the time of one rotation if the cable connecting the seat to the arm makes an angle of 30 degrees with the vertical.

2. Relevant equations: Circumference = distance = 2(pi)R; Acceleration = V^2/R; F = m(V^2/R)

3. The attempt at a solution: I honestly have no clue how to do this one. I keep working myself in circles, which doesn't seem to be getting anything accomplished. Any help on where I should start?

P: 610 Is there any figure given with this? I have trouble imagining this...

Mentor P: 11,625 Quote by GRice40: I honestly have no clue how to do this one. I keep working myself in circles, which doesn't seem to be getting anything accomplished. Any help on where I should start? It's always best to start with a diagram. Draw a cross section showing the vertical pole, a cross beam, and one cable (at 30 degrees from vertical) with a mass on the end. Identify the center of rotation of the mass and the radius of the circle it will follow. After that you'll be drawing an FBD and working out the components of the accelerations acting.

P: 20 There was a figure with the problem, but I can't save it to upload here. I've been working through it, and it seems that if I could find the velocity, I'd be able to solve it. Any tips on how to go about finding velocity?
P: 20 Here's what I have so far: Radius: 3 + 5sin(30) = 5.5 m; Circumference = 2(pi)(5.5) = 34.56 m; V = d/t = 34.56/t; Acceleration = V^2/R = (34.56/t)^2/5.5. That's about as far as I can get without knowing another variable. =/

P: 610 Without the figure I can't help; it's difficult to see the situation.

Mentor P: 11,625 Quote by GRice40: Here's what I have so far: Radius: 3 + 5sin(30) = 5.5 m; Circumference = 2(pi)(5.5) = 34.56 m; V = d/t = 34.56/t; Acceleration = V^2/R = (34.56/t)^2/5.5. Good so far! Now, ask yourself why the cables are hanging at an angle of 30° to the vertical rather than flying out horizontally.

P: 20 Hope this works, we'll see! Attached Thumbnails

P: 20 Hmm, gravity pulling them down would be my guess?

P: 610 That helps. Set up a free body diagram for the girl; there is tension in the wire.

P: 20 I assume that because she is staying in place in the y direction, the y direction is in equilibrium. So here's what I have: Tcos(30) = m * (9.8). (Another possibility that is coming to mind is: Force(out) + W = T. Here's the other possibility running through my mind: Force(out) = Tsin(30) for the x direction.)

HW Helper P: 2,316 Quote by GRice40: Here's what I have so far: [same work as above]. It would be useful to know the other formula for centripetal acceleration: Ac = V^2/R or 4π^2 R/T^2, where T is the period.

P: 610 Which component of tension is acting as the centripetal force?

P: 20 The centripetal force should be Tsin(30), I think?

HW Helper P: 2,316 Quote by GRice40: The centripetal force should be Tsin(30), I think? That's true. Did you read post #12?

P: 610 Yes...
P: 610 We also know that the centripetal force is $$\frac{mv^2}{r}$$ and you already said $$T\cos(30)=mg$$, so can you manipulate these?

P: 20 Ok, here's what I got: F(out) = m*a1 = Tsin(30); W = m*a2 = Tcos(30). Dividing the two: (m*a1)/(m*a2) = Tsin(30)/Tcos(30), which reduces to a1/a2 = sin(30)/cos(30), or a1/9.8 = 0.577. So a1 = 0.577(9.8) = 5.66 m/s^2. Then 5.66 = 4(pi)^2 R/t^2, so 5.66 = 217.13/t^2, t^2 = 217.13/5.66, and t = sqrt(217.13/5.66) = 6.19 seconds. That's what the answer is supposed to be. Thank you so much guys, I've been working on that problem for HOURS!

P: 610 Don't forget to include figures when they are provided; without the figures it's difficult for people here to understand. Always post the problem AS it appears in the text; don't psycho-analyse the problem and write what YOU think the problem statement is. You will likely get more help when you present the problem and your attempted work in an organised manner. Good luck :)
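As a numerical footnote to the thread (not part of the original posts), the final arithmetic checks out: with R = 3 + 5 sin 30° and a = g tan 30° = 4π²R/T², the period comes out to 6.19 s.

```python
import math

g = 9.8                                     # m/s^2
R = 3.0 + 5.0 * math.sin(math.radians(30))  # radius of the circular path: 5.5 m
a = g * math.tan(math.radians(30))          # centripetal accel., ~5.66 m/s^2
T = math.sqrt(4 * math.pi**2 * R / a)       # period, from a = 4*pi^2*R/T^2

print(round(T, 2))  # → 6.19
```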
# The comparative year-end SFPs for Hilary Co.

The comparative year-end SFPs for Hilary Co. were as follows: Shares were issued when the exchange rate was $0.64. The land was purchased at the end of 20X3. The bonds were issued at the end of 20X1, and mature at the end of 20X7. The exchange rate at the end of 20X1 was $0.74. Dividends are declared and paid at the end of each year. The retained earnings at the end of 20X2 were earned at an average rate of $0.79.

Required
1. Translate the 20X2 and 20X3 SFPs, using the current-rate method. (Note that each year's SFP is translated at the current rate at that year's SFP date.)
2. Translate the 20X2 and 20X3 SFPs, using the temporal method.
3. Calculate the translation gain or loss for 20X3, under each translation method.

P9-13
# If $y = [ \log (x+\sqrt{1+x^2}) ]^2$, show that $(1+x^2)\large\frac{d^2y}{dx^2}$$+x\large\frac{dy}{dx}$$-2=0$
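A short derivation (a sketch, not from the original page), using the standard fact that the derivative of $\log(x+\sqrt{1+x^2})$ is $1/\sqrt{1+x^2}$:

```latex
% Differentiate once, using d/dx [log(x + sqrt(1+x^2))] = 1/sqrt(1+x^2):
y' = \frac{2\log\left(x+\sqrt{1+x^2}\right)}{\sqrt{1+x^2}}
\quad\Longrightarrow\quad
\sqrt{1+x^2}\; y' = 2\log\left(x+\sqrt{1+x^2}\right).

% Differentiate both sides of the last identity:
\sqrt{1+x^2}\; y'' + \frac{x}{\sqrt{1+x^2}}\; y' = \frac{2}{\sqrt{1+x^2}}.

% Multiply through by sqrt(1+x^2):
(1+x^2)\, y'' + x\, y' - 2 = 0.
```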
# Using gas laws to find density

An unknown gas at $\mathrm{63.1\ ^\circ C}$ and $\mathrm{1.05\ atm}$ has a molar mass of $\mathrm{16.04\ g/mol}$. What is the density of the gas? I know I need to use the $pV=nRT$ equation, but it gives me g/mol and not moles. I'm not really sure what to do here. $$1.05\ V = n\ 0.08206 \cdot (63.1+273.15)$$ OK, that's as far as I got. I have no idea how I would even go about converting g/mol back to moles so I can solve for V, and after that I would also need to figure out a way to find grams so I can divide g/V to get the density.

• Put 1 L for V so you can see how many moles there'd be in a liter. Convert between mass and moles by the equation mass = moles * molar mass. Divide your mass by the volume (1 L) and you'll get the density in grams per liter. – lightweaver Oct 3 '15 at 1:39
• @lightweaver Thank you very much, so if a variable in any equation isn't specified is it automatically assumed to be 1? – user3882522 Oct 3 '15 at 1:52
• In this case you can set one of the variables to 1 because density is an intensive variable, meaning that its value is independent of how much material there is. So it could be any number, but the value will remain the same. We just set it to 1 as it simplifies the calculations. – Nanoputian Oct 3 '15 at 1:56
• In my answer I have done the same thing, but I made the number of moles 1 and then see how many litres there are in 1 mole. Either way, you will get the same answer. – Nanoputian Oct 3 '15 at 1:58
• The molar density is n/V, so the mass density is Mn/V, where M is the molar mass, n is the number of moles, and V is the volume. So Mn/V = (pM)/RT. – Chet Miller Oct 29 '15 at 2:55

Okay, to find the density of the gas, you need to know its mass and its volume. To do this, let's say we have one mole of this substance, so its mass will be $16.04~\mathrm g$.
Its volume can be found by rearranging the ideal gas equation: $$V = \frac{nRT}{p}$$ Now all we have to do is plug in the values. However, you have to be careful to use the right units; this is a common source of mistakes when using the ideal gas law, but here the correct units have been used: \begin{align}V &= \mathrm{\frac{1~mol\times0.08206~L~ atm~ mol^{-1}~K^{-1}\times336.25~K}{1.05~atm} }\\ &=\mathrm{26.28~L}\end{align} Now the density can be calculated: $$\rho =\mathrm{\frac{16.04~g}{26.28~L} = 0.61~g~L^{-1}}$$
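The same numbers drop out of the closed form $\rho = pM/RT$ mentioned in the comments; a quick Python check (units: atm, L, mol, K):

```python
# Ideal-gas density check: rho = p*M/(R*T)
p = 1.05            # pressure, atm
M = 16.04           # molar mass, g/mol
R = 0.08206         # gas constant, L*atm/(mol*K)
T = 63.1 + 273.15   # temperature, K

V_per_mol = R * T / p   # volume of one mole, ~26.28 L
rho = M / V_per_mol     # density, g/L

print(round(V_per_mol, 2))  # → 26.28
print(round(rho, 2))        # → 0.61
```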
# Frictional force is independent of the area of contact

1. Aug 9, 2009 monty37 One of the laws of friction states that the frictional force is independent of the area of contact and of the velocity. How true is this? My book says this particular law is only approximately true.

2. Aug 9, 2009 Pupil Re: friction As far as I know, when one says friction does not depend on the area of contact, they do it by looking at the equation for the force of friction. $$F = \mu F_n$$ So the force of friction depends only on the unit-less constant Mu and the normal force. Neither of these things depends on the area, so we say the force of friction doesn't depend on area. Someone might be able to give a deeper understanding of why, but that's how I've always thought of it.

3. Aug 9, 2009 dx Re: friction It's possible to understand this roughly in the following way. Divide the surface area of the floor into small pieces. The number of these tiny pieces that are pushing the block is proportional to A, the contact area, but the force that each of these little pieces applies to the object is proportional to the pressure exerted on the floor at that point, i.e. to N/A (where N is the normal force), so the total friction force is proportional to A(N/A) = N.

4. Aug 9, 2009 Staff: Mentor Re: friction Another way of looking at it: all of the properties of the contact patch, including its area, are contained in the friction coefficient.

5. Aug 9, 2009 rcgldr Re: friction The book is correct; it's just an approximation that doesn't always apply in real life, especially if the objects get reasonably small. This is discussed and demonstrated as an off-topic subject in the second half of video #2 in this series on gyroscopes: http://www.gyroscopes.org/1974lecture.asp In that video, the smaller but otherwise identical (same density) "cubes" have a much higher static coefficient of friction.
In the case of tires, load sensitivity causes the coefficient of friction to decrease with load, so larger tires are better until weight, drag, or other factors become an issue. (Larger tires also allow for more heat dissipation in a race car.)

6. Aug 10, 2009 J_Cervini Re: friction The way I remember it ('88 grad), you have two types of friction: Static, which is related to the force necessary to get a stationary object to move from rest; and Dynamic, which is related to the force necessary to sustain a moving object at a constant velocity. Does the question pertain to both phases of friction or just one?

7. Aug 11, 2009 Shooting Star Re: friction That would imply that the coefficient of friction between the same pair of materials would vary with the area of contact, which is not the case. This makes good sense.

8. Aug 11, 2009 KLoux Re: friction Actually, this is the case. It probably varies with materials, but one example is car tires on asphalt. High performance cars have wider tires to increase the size of the contact patch and in turn the coefficient of friction. Maybe this is a different case since the rubber is compliant and can be squeezed into the surface of the not-perfectly-smooth asphalt, but my college physics book also states that the coefficient of friction takes into account the area (and other factors, like whether or not one material is compliant and can squeeze into cracks of the other material). If your area of contact changes, you might need a new coefficient of friction, depending on the materials, allowable error, etc. -Kerry

9. Aug 11, 2009 Pupil Re: friction This is something that bothered me a bit (like your tire example). How is Mu calculated between material A and A'? Testing it would be an easy method. Suppose A and A' are made of the same material, but A' is twice the length and width of A. Would the coefficient of friction of A' be larger than that of A in the real world?
If so, then Mu is dependent on the area, and thus the force of friction is as well.

10. Aug 11, 2009 Nabeshin Re: friction These kinds of experiments are common in a first year mechanics course, and they show that the coefficient of friction is not correlated with the area of contact. Of course, when I performed the experiments it was in a crummy little lab and any small effects would have been completely surpassed by experimental error, so the most I can say is that there is a very loose correlation, if any. The example of a racecar's tires was explained to me to have to do with things other than just normal sliding friction. It was explained to me that because the tires got so hot from friction, the rubber became, in a sense, sticky, which gave rise to a different force which is correlated with contact area.

11. Aug 11, 2009 Shooting Star Re: friction No, it is not the case in the regime where the laws of static and dynamic friction are valid -- otherwise those laws wouldn't exist. I was not considering rolling friction, so the tire scenario is not very pertinent to my point. This point is well exemplified by Pupil in post #9.

12. Aug 11, 2009 rcgldr Re: friction They're not laws, but instead simplifications of a real-life situation, similar to the Bernoulli equation being a simplified case of the more accurate Navier-Stokes equations. Please watch the second half of video #2: http://www.gyroscopes.org/1974lecture.asp A clean flat plate with 4 solid blocks of varying sizes is angled upwards, and the larger blocks begin sliding well before the smaller blocks. The smaller blocks exhibit a higher coefficient of friction with the flat plate, demonstrating some form of load sensitivity (the smaller blocks exert a smaller force per unit area). From wiki: though in general the relationship between normal force and frictional force is not exactly linear: http://en.wikipedia.org/wiki/Friction

13.
Aug 11, 2009 Staff: Mentor Re: friction The tire scenario is a static friction scenario. Yes, it is usually true that static friction does not vary with contact area, but the OP asked how true the law is, and static friction in tires is one example where it is not true. I suppose there is no single set of equations that people generally refer to for friction, but one could write one with separate terms for how the friction force varies with area. In cases where it doesn't, those terms would simply cancel out, similar to the way Bernoulli's equation is used.

14. Aug 12, 2009 monty37 Re: friction When the area of contact becomes smaller, pressure increases and then the law does not hold true; is this not the case for all kinds of friction: rolling, sliding, static, dynamic?

15. Aug 18, 2009 Shooting Star Re: friction (Sorry for the delayed post.) I believe that applies to all the laws of science. The point was that the equation for static friction holds quite true within a narrow regime. Or does it? The video was most illuminating. Yes, my oversight. I was thinking of something else. Could you give a simple example where the area is explicitly involved and which would reduce to an area-independent equation for simple cases?
# Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program., no. 1-2 (2014): 1-38

In this paper we develop a randomized block-coordinate descent method for minimizing the sum of a smooth and a simple nonsmooth block-separable convex function and prove that it obtains an $$\varepsilon$$-accurate solution with probability at least $$1-\rho$$ in at most $$O((n/\varepsilon ) \log (1/\rho ))$$ iterations, where $$n$$ is t...
# isWellDefined(ZZ,LieAlgebraMap) -- whether a Lie map is well defined ## Synopsis • Function: isWellDefined • Usage: b=isWellDefined(n,f) • Inputs: • Outputs: • b, , true if $f$ is well defined and commutes with the differentials up to degree $n$, false otherwise ## Description It is checked that the map $f: M \ \to\ L$ maps the relations in $M$ to 0 up to degree $n$ and that $f$ commutes with the differentials in $M$ and $L$. If $n$ is big enough and ideal(M) is of type List, then it is possible to get that $f$ maps all relations to 0, which is noted as the message "the map is well defined for all degrees". This may happen even if the map does not commute with the differential (see g in the example below). i1 : L=lieAlgebra({a,b},Signs=>1,LastWeightHomological=>true, Weights=>{{1,0},{2,1}}) o1 = L o1 : LieAlgebra i2 : F=lieAlgebra({a,b,c}, Weights=>{{1,0},{2,1},{5,2}},Signs=>1,LastWeightHomological=>true) o2 = F o2 : LieAlgebra i3 : D=differentialLieAlgebra{0_F,a a,a a a b} o3 = D o3 : LieAlgebra i4 : Q1=D/{a a a a b,a b a b + a c} o4 = Q1 o4 : LieAlgebra i5 : use F i6 : Q2=F/{a a a a b,a b a b + a c} o6 = Q2 o6 : LieAlgebra i7 : f=map(D,Q1) warning: the map might not be well defined, use isWellDefined o7 = f o7 : LieAlgebraMap i8 : isWellDefined(6,f) the map is not well defined the map commutes with the differential for all degrees o8 = false i9 : g=map(Q1,Q2) warning: the map might not be well defined, use isWellDefined o9 = g o9 : LieAlgebraMap i10 : isWellDefined(6,g) the map is well defined for all degrees the map does not commute with the differential o10 = false i11 : h=map(Q1,D) o11 = h o11 : LieAlgebraMap i12 : isWellDefined(6,h) the map is well defined for all degrees the map commutes with the differential for all degrees o12 = true
Gravitational mass deep inside does radiate, and the radiation propagates through the outside material to the surface. If the material is saturated, gravitational mass, that is, radiational pressure, is additive. Even if not, the steady-state partial radiational pressure is additive on the surface. The actual inside temperature is rather high.

## Energy conservation approach

Energy is proportional to gravitational mass, $E = C_M T_0 (M_i T/T_0)$ where $C_M$ is the specific heat. $E = C_M T_0 M_{pg}$ Energy is additive, but $C_M$ depends on density. $\Delta E = C_M T_0 \Delta M_{pg}$ $\Delta M_{pg} =\Delta E /C_M T_0$
# If $\int_Af_n\,d\mu\to\int_A f\,d\mu$ for all $A$, what type of convergence $f_n\to f$ do we have?

I want to relate strong convergence of measures and the usual notions of convergence for functions. Let $(X,\mathcal{A})$ be a measurable space.

Definition: A sequence of probability measures $\mu_n$ converges strongly to $\mu$ if $\mu_n(A)\to\mu(A)$ for all $A\in\mathcal{A}$.

Now suppose $\mu$ is a probability measure and $f_n,f$ are nonnegative measurable functions with $\int_X f_n\,d\mu=\int_X f\,d\mu=1$. Set $\nu_n(A)=\int_Af_n\,d\mu$ and $\nu(A)=\int_A f\,d\mu$. Strong convergence of $\nu_n\to \nu$ then reads: $\int_A f_n\,d\mu\to\int_A f\,d\mu$ for all $A\in\mathcal{A}$.

Question: If $\nu_n\to \nu$ strongly, does $f_n\to f$ pointwise a.e.? In measure? Almost uniformly? In any other sense?

We can also state everything "backwards", in an equivalent manner: if $\nu_n,\nu$ are given probability measures, set $\mu=\nu/2+\sum_{n=1}^\infty 2^{-n-1}\nu_n$ and $f_n=d\nu_n/d\mu$, $f=d\nu/d\mu$. The same question applies. All I could get with Fatou's Lemma is that $f\geq\liminf f_n$, which doesn't seem really useful.

• This is far from an answer, but convergence in $L^1$ certainly suffices, since $$|\nu_n(A) - \nu(A)| = \left|\int_A(f_n-f)\,d\mu\right| \leq \int_A|f_n-f|\,d\mu \leq \int_X|f_n-f|\,d\mu \xrightarrow{n \to\infty} 0$$ – Guido A. Jun 27 '18 at 5:48

As a counterexample, consider $f \equiv 1$ on $[0,1]$ and $f_n (x) = 1+\cos(2\pi n x)$, where I am considering the Lebesgue measure. It is then not hard to see that $\int f_n g \,d\mu \to \int f g \,d \mu$ for all $g \in L^2$. To see this, use the density of $C_c^\infty$ in $L^2$, and for a $C_c^\infty$ function use partial integration. Now, note $\int |f-f_n| \,d\mu \geq c >0$. But if we had $f_n \to f$ in measure, then dominated convergence would imply $\int |f-f_n|\,d\mu \to 0$.
Finally, since the indicator functions span a dense subset of $L^\infty$, your convergence is equivalent to $f_n \to f$ in the weak sense in $L^1$ (this also uses that $\|f_n\|_{L^1}$ is bounded).
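The counterexample can also be illustrated numerically; the following is my own sketch (not part of the thread), approximating the integrals on a fine grid: the set-wise integrals of $f_n(x)=1+\cos(2\pi nx)$ over intervals $[0,a]$ approach those of $f\equiv 1$, while the $L^1$ distance stays near $2/\pi$, so there is no convergence in measure.

```python
import numpy as np

# f = 1 and f_n(x) = 1 + cos(2*pi*n*x) on [0, 1]: nu_n(A) -> nu(A) for
# intervals A = [0, a], but the L1 distance ||f_n - f||_1 stays near 2/pi.
N = 400_000
x = np.linspace(0.0, 1.0, N + 1)
dx = 1.0 / N

def integral(g, a):
    """Trapezoid-rule integral of g over A = [0, a]."""
    y = np.where(x <= a, g, 0.0)
    return np.sum((y[1:] + y[:-1]) / 2) * dx

f = np.ones_like(x)
for a in (0.3, 0.7, 1.0):
    gaps = [abs(integral(1 + np.cos(2 * np.pi * n * x), a) - integral(f, a))
            for n in (1, 10, 100)]
    assert gaps[-1] < 1e-3          # nu_100(A) is already very close to nu(A)

# ...but the L1 distance does not vanish: it is ~ 2/pi for every n
l1 = integral(np.abs(np.cos(2 * np.pi * 100 * x)), 1.0)
assert abs(l1 - 2 / np.pi) < 1e-3
```

Of course this only probes intervals $[0,a]$, not all measurable $A$, but it makes the tension between the two modes of convergence concrete.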
### Newton's method for solving inclusions using set-valued approximations

Citation: CIBULKA, R. Newton's method for solving inclusions using set-valued approximations. Mariánská, Czech Republic, 2014. Lecture/poster, in English.

Author: Ing. Radek Cibulka, Ph.D. (2014)

Given Banach spaces $X$ and $Y$, a single-valued mapping $f: X \to Y$ and a multivalued mapping $F:X\rightrightarrows Y$, we investigate the convergence properties of a Newton-type iterative process for solving the generalized equation. The problem is to $$\label{Eqn1} \mbox{find}\quad x\in X \quad \mbox{such that}\quad 0\in f(x)+F(x).$$ This model has been used to describe in a unified way various problems such as equations, inequalities, variational inequalities and, in particular, optimality conditions. We study the following iterative process: {\it Choose a sequence of set-valued mappings $A_k: X\times X\rightrightarrows Y$ approximating the function $f$ and a starting point $x_0 \in X$, and generate a sequence $(x_k)$ in $X$ iteratively by taking $x_{k+1}$ to be a solution to the auxiliary generalized equation $$\label{Newton-Seq} 0\in A_k(x_{k+1},x_k)+F(x_{k+1}) \quad \mbox{for each} \quad k \in \{0,1,2, \dots\}.$$} In the first part, we present a result concerning the stability of metric regularity under set-valued perturbations. In the second part, this statement is applied in the study of (super-)linear convergence of the iterative process (\ref{Newton-Seq}). Some particular cases are also discussed in detail. This work is based on a forthcoming joint paper with Samir Adly and Huynh Van Ngai.
# Is there an easy way to find the new mean of a product of gaussians? Let's assume we have 2 gaussian functions. Allowing some notation abuse, they would look like: $$G_i(x) = Ce^{\frac{-1}{2}[(x-\mu_i) / \sigma_i]^2}$$ $$G_j(x) = Ce^{\frac{-1}{2}[(x-\mu_j) / \sigma_j]^2}$$ Where $$C$$ is the constant factor of the Gaussians, which I encapsulate here for simplicity. I claim that $$G_i \cdot G_j$$ is another Gaussian with parameters $$\mu_{ij}, \sigma_{ij}$$, but I am struggling at expressing the product as a new Gaussian to find the new parameters. So far I have this: $$G_i \cdot G_j(x) = Ce^{(\frac{-1}{2}[(x-\mu_i) / \sigma_i]^2) + (\frac{-1}{2}[(x-\mu_j) / \sigma_j]^2)}$$ If we focus on just the exponent and remove the constant factor we get: $$[(x-\mu_i) / \sigma_i]^2 + [(x-\mu_j) / \sigma_j]^2$$ My goal is to reorder the terms in that expression to get something of the form $$[(x-\mu_{ij})/\sigma_{ij}]^2$$ However I got stuck after getting the following: $$\frac{(\sigma_j^2 + \sigma_i^2)x^2 - 2x(\sigma_j^2\mu_i + \sigma_i^2\mu_j) + (\sigma_j^2\mu_i^2 + \sigma_i^2\mu_j^2)}{\sigma_i^2\sigma_j^2}$$ I could keep going by dividing both elements of the fraction by the coefficient in front of the $$x^2$$ term, but after that I am not finding an easy way to factor everything back into a singular square term on the numerator (i.e. I don't know what to do with the leftover constant terms). Can this be done? Am I wrong about my hypothesis? Is there a theorem I can quote to avoid doing the entire expansion myself? • Complete the square; you'll have a constant term left over, but since that's in an exponent you can take that out of the exponent and multiply the constant factor $C$ by something to compensate – alphacapture Apr 4 at 1:46 • Oh right! I forgot that I could kill that term that way. I'll try it tomorrow and see where that leads me. Thank you for the tip.
– Makogan Apr 4 at 1:48 If, as @alphacapture commented, you write $$\frac{(x-\mu_1)^2}{\sigma_1^2}+\frac{(x-\mu_2)^2}{\sigma_2^2}=\frac{(x-\mu)^2}{\sigma^2}+\tau$$ then, completing the square or identifying the coefficients, you should arrive at $$\mu=\frac{\mu_2 \sigma_1^2+\mu_1 \sigma_2^2}{\sigma_1^2+\sigma_2^2}\qquad \sigma=\frac{\sigma_1 \sigma_2}{\sqrt{\sigma_1^2+\sigma_2^2}}\qquad \tau=\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}$$ and then $$e^{-\frac{(x-\mu_1)^2}{2 \sigma_1^2}}\cdot e^{-\frac{(x-\mu_2)^2}{2 \sigma_2^2}}=e^{-\frac \tau 2}\,e^{-\frac{(x-\mu)^2}{2\sigma^2} }$$
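A quick numerical sanity check of the completed-square identity (my own sketch, with arbitrary parameter values; $\tau$ is taken with the sign convention that makes the left-hand side equal $(x-\mu)^2/\sigma^2+\tau$):

```python
import numpy as np

# Check: (x-mu1)^2/s1^2 + (x-mu2)^2/s2^2 = (x-mu)^2/s^2 + tau
mu1, s1, mu2, s2 = 0.7, 1.3, -2.0, 0.6

mu  = (mu2 * s1**2 + mu1 * s2**2) / (s1**2 + s2**2)
s   = s1 * s2 / np.sqrt(s1**2 + s2**2)
tau = (mu1 - mu2)**2 / (s1**2 + s2**2)   # positive under this convention

x = np.linspace(-5.0, 5.0, 101)
lhs = (x - mu1)**2 / s1**2 + (x - mu2)**2 / s2**2
rhs = (x - mu)**2 / s**2 + tau
assert np.allclose(lhs, rhs)

# Hence the (unnormalized) product of the two Gaussians is again a Gaussian,
# scaled by the constant factor exp(-tau/2):
product = np.exp(-0.5 * (x - mu1)**2 / s1**2) * np.exp(-0.5 * (x - mu2)**2 / s2**2)
assert np.allclose(product, np.exp(-tau / 2) * np.exp(-0.5 * (x - mu)**2 / s**2))
```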
# Prove that the commutator is a Lie bracket in $\mathrm{Lie}(G)$ Let $$G$$ be a matrix Lie group (i.e. a closed subgroup of $$\mathrm{GL}(n,\mathbb C)$$), and consider the set (following the notation in the Wikipedia page): $$\mathrm{Lie}(G)\equiv\{X\in M(n,\mathbb C) : \,\,e^{tX}\in G\,\,\forall t\in\mathbb R\}.$$ I know that this turns out to be a Lie algebra with Lie bracket given by the commutator of matrices, but I'm trying to get a better understanding of why this is the case. The case $$G=SO(3)$$, $$\mathfrak g=\mathfrak{so}(3)$$ was worked out in this other question. What about the more general scenario of $$G$$ an arbitrary closed subgroup of $\mathrm{GL}(n,\mathbb C)$? Can a similar argument be made in this case?
###### Exercise7 Which of the following are integers? There may be more than one correct answer. $$-9$$ $$\sqrt{36}$$ $$-17860$$ $$-6.7\overline{51}$$ $$\pi$$ $$\sqrt{11}$$ $$-{\frac{3}{73}}$$ $$0$$
# Discretionary hyphen destroys kerning and ligature

Consider the following example (plain TeX format):

VA fi\par
V\-A f\-i
\bye

I tried TeX, pdfTeX, XeTeX, and LuaTeX, but the second line looks the same as the first line only when using LuaTeX. My questions are: 1. Is this an intended behaviour? 2. Is it difficult or impossible to make other TeX engines act like LuaTeX (discretionary hyphen does not destroy kerning and ligature)? 3. How can I work around this problem?

• \discretionary{f-}{i}{fi} Nov 10, 2017 at 11:47

\discretionary{f-}{i}{fi} seems to work. You can define a macro to ease the use:

\def\dfi{\discretionary{f-}{i}{fi}}
\def\VA{\discretionary{V-}{A}{VA}}
VA fi\par
AD\VA NCE Nar\dfi na
\bye

Not ideal, but if you don't have many problems, you could work with that. In any case, hyphenation is not something the user should worry about much in theory; do you have the correct language set in your document? That comes with many rules to tell TeX how to break up words. And you also have \hyphenation{wordwithf-ibreak wordwithV-Abreak} to add rules for particular words; it does work well (ligature when not broken, and broken correctly, plus you set it up once for the whole document). Rant/Question: I don't understand why \- is a primitive. What was the need? Couldn't it be just defined in terms of \discretionary? It's one of those doubts I have about TeX :)

• Regarding the claim that \- is a primitive: it's not true. Instead, \- gets expanded to \discretionary{-}{}{}. – Mico Nov 10, 2017 at 12:34
• Regarding the last part, I haven't looked into this one, but TeX does contain, for efficiency, primitives that can be defined in terms of others: for example \hss (and arguably even \hfill). Nov 10, 2017 at 12:35
• @Mico I think it is. Not that it cannot be redefined (in fact it is redefined) but it is a primitive originally. ShreevatsaR, What would be the definition of \hss? I always thought \hfill was a macro!
And another thing that comes to mind is that I don't understand the particular definition of \hidewidth :) Nov 10, 2017 at 12:42 • @Mico I just checked with \edef\q{\-} and it does indeed expand (this I thought wouldn't happen), so \- is an expandable primitive that expands to \discretionary{-}{}{}, but it has been defined "by some means other than" \def\-{\discretionary{-}{}{}} (although the result is exactly the same). @ShreevatsaR Thanks for the info ;) Nov 10, 2017 at 14:24 • @Manuel I tried \edef\q{\-} and it doesn't expand to anything other than \-. Are you sure you tried it in TeX (where \- is a primitive) and not in LaTeX (where \- has been overwritten, defined as a macro)? Nov 10, 2017 at 16:32 By default, the control sequence \- is an abbreviation for \discretionary{-}{}{} (see p. 95 of the TeXbook). The first argument of \discretionary, the prebreak, specifies what gets inserted immediately before the linebreak if a linebreak is inserted (here: a hyphen character). The second argument, the postbreak, specifies what's inserted immediately after the line break if a linebreak occurs (here: nothing). The third argument, the nobreak, specifies what gets typeset if no linebreak occurs -- here: nothing or, more precisely and crucially, an empty token, {}. For the sake of specificity, consider the following example: half\-line. This first gets expanded to half\discretionary{-}{}{}line. Suppose the word does not get broken up at the end of a line. What happens next depends on which TeX engine is in use. • Assuming pdfLaTeX is in use, what gets typeset is half{}line, and the fl ligature is (correctly!) broken up, since that is what's supposed to happen if f{}l is encountered. • In contrast, under LuaLaTeX all text-mode instances of {} are discarded prior to final processing. Therefore, halfline is processed without the {} between f and l, and the fl ligature is (incorrectly in this case) not broken up.
• What happens under XeLaTeX depends partly on the version of XeLaTeX that's in use on your system. E.g., if you use TeXLive2018, halfline and half\-line are both typeset with an fl-ligature if no line break occurs between half and line. At some point in the not-too-distant past, though, the fl-ligature was suppressed if half\-line was encountered. To verify these claims, simply compile the following document under pdfLaTeX, XeLaTeX, and LuaLaTeX:

\documentclass{article}
\begin{document}
halfline half\-line
\end{document}

For "plain" pdfTeX, XeTeX, and LuaTeX, simply run

halfline half\-line
\bye

To get all three engines to produce the same output, don't write half\-line. Instead, write hal\discretionary{f-}{l}{fl}ine. That way, the fl-ligature will be used whenever no line break occurs.

• Good morning to you. Why do not see in chat? +1. Nov 10, 2017 at 12:47
• @Mico Knuth admits at the outset to be telling some white lies; actually, \- stands for \discretionary{\char\hyphenchar\font}{}{}; if a font sets \hyphenchar to −1, you get a (possible) break with no character inserted. Which explains why LaTeX redefines it to insert, if \hyphenchar\font is negative, the \defaulthyphenchar. Nov 10, 2017 at 14:08
• About \- being a primitive, see tex.web section 1114 (see also section 208); \- and \discretionary are two “flavors” (same eq_type, but different equiv) of the same primitive command. The different actions of these two flavors correspond to the two branches of the outer if in section 1117. (This comment might interest @egreg too.) – GuM Nov 10, 2017 at 18:44
• @GuM You spared me a search! ;-) By the way, the third doubly dangerous paragraph on page 455 of the TeXbook says that \- is equivalent to \discretionary{\char h}{}{}, where h is the \hyphenchar of the current font. Nov 10, 2017 at 18:52
• @egreg: …and you didn’t need to search for that because p. 455 is the main (underlined) reference for \- in the index. ;-) – GuM Nov 10, 2017 at 18:56
# Rectangles in rectangles

## Python 2, 66 59 bytes

lambda n,k:sum(a%n*(n-a%n)==a/n*(k-a/n)for a in range(n*k))

Try it online!

Each possible rectangle inside the $n \times k$-rectangle can be specified by two integers, $0 \le a \lt n$ and $0 \le b \lt k$: To verify a rectangle given $a$ and $b$, it suffices to check if one angle is a right angle. To do this I take the dot product of $\binom{0}{a}-\binom{b}{0}=\binom{-b}{a}$ and $\binom{k-b}{n}-\binom{0}{a}=\binom{k-b}{n-a}$ to check whether the angle at $\binom{0}{a}$ is a right angle: $$\langle \left( \begin{matrix} -b \\ a \\ \end{matrix}\right), \left(\begin{matrix} k-b \\ n-a \\ \end{matrix} \right) \rangle = 0 \\\Leftrightarrow a\cdot(n-a)-b\cdot(k-b)=0 \\\Leftrightarrow a\cdot(n-a)=b\cdot(k-b)$$

## 05AB1E, 10 8 bytes

LDI-*¢O

Try it online!

Commented:

          # implicit input: [n, k]
L         # for both values take the [1..x] range
          # [[1,...,n], [1,...,k]]
D         # duplicate this list
I         # push the input [n,k]
-         # subtract this from the ranges
          # [[1-n,...,n-n], [1-k,...,k-k]]
          # =[[-n+1,...,0], [-k+1,...,0]]
*         # multiply with the ranges
          # [[1*(-n+1),...,n*0], [1*(-k+1),...,k*0]]
          # push all lists of this list on the stack
¢         # count the occurences of each value of one list in the other
O         # sum those counts

## C (gcc), 63 61 bytes

Saved 2 thanks to ceilingcat!!!

s;a;f(n,k){for(s=a=n*k;a--;)s-=a%n*(n-a%n)!=a/n*(k-a/n);a=s;}

Try it online!
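For readability, here is my own ungolfed reference version of the Python 2 lambda, written in Python 3 (`a // n` replaces Python 2's integer `a / n`). The golfed single index `a` in `range(n*k)` encodes the pair `(a % n, a // n)`:

```python
# Count rectangles inscribed in the n-by-k rectangle: a candidate is a pair
# (a, b) with 0 <= a < n and 0 <= b < k, and it is a genuine rectangle exactly
# when one corner angle is right, i.e. a*(n-a) == b*(k-b).
def count_rectangles(n, k):
    return sum(a * (n - a) == b * (k - b)
               for a in range(n) for b in range(k))

# The golfed loop from the answer, with both offsets packed into one index.
def count_golfed(n, k):
    return sum(a % n * (n - a % n) == a // n * (k - a // n)
               for a in range(n * k))
```

For example, `count_rectangles(2, 2)` returns 2: the axis-aligned square (`a = b = 0`) and the inscribed diamond (`a = b = 1`). Both functions agree for all inputs.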
# An investment of Rs. 100 lakhs is to be made for construction of a plant, which will take two years to start production. The annual profit from the operation of the plant is Rs. 20 lakhs. What will be the payback time? 5 years 7 years 12 years 10 years
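One common reading of this payback calculation (my own worked sketch): the plant earns nothing during the two-year construction period, after which Rs. 20 lakhs per year recovers the Rs. 100 lakh outlay.

```python
# Payback time = gestation period + (investment / annual cash flow)
investment = 100        # Rs. lakhs
annual_profit = 20      # Rs. lakhs per year
construction_years = 2  # no cash flows while the plant is being built

payback_years = construction_years + investment / annual_profit
# 2 + 100/20 = 7 years
```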
### Surface plasmon resonance sensor based on bimetallic alloys grating

Author: Dhibi, A. ♦ Sassi, I. ♦ Oumezzine, M. Source: SpringerLink. Content type: Text. Publisher: Springer India. File Format: PDF. Copyright Year: ©2015. Language: English. Subject Domain (in DDC): Natural sciences & mathematics ♦ Physics. Subject Keywords: Surface plasmon resonance ♦ Optical sensor ♦ Bimetallic alloy grating ♦ Physics ♦ Astrophysics and Astroparticles.

Abstract: A surface plasmon resonance sensor based on a triangular alloy grating with high performance is proposed. The grating consists of homogeneous binary gold alloys of formula Au$_{x}$M$_{1-x}$, where M is silver, platinum or palladium. It is observed that the two metals platinum and palladium have an almost identical surface plasmon resonance in gold alloys, irrespective of the composition x. The resonance sensitivity is highly dependent on the gold composition: a composition of about 0.1 Au and 0.9 Ag is found to give the highest sensitivity, and consequently the quality of the sensor is improved. Numerical simulations show that the angular sensitivity of the sensor reaches 100°/RIU and the full width at half maximum of the resonant dip is only 1.2°. Moreover, the sensor has not only a high sensitivity and a high resolution, but also a good linearity.

ISSN: 09731458. e-ISSN: 09749845. Journal: Indian Journal of Physics. Volume Number: 90. Issue Number: 1. Publisher Date: 2015-06-20. Publisher Place: New Delhi. Page Count: 6. Starting Page: 125. Ending Page: 130.
A UV-visible spectrophotometer has a minimum detectable absorbance of $0.02$. The minimum concentration of a protein sample that can be measured reliably in this instrument with a cuvette of $1cm$ path length is _________$\mu M$ . The molar extinction coefficient of the protein is $10,000\;L\:mol^{-1}\:cm^{-1}$.
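The blank can be filled by rearranging the Beer-Lambert law $A = \epsilon c l$; my worked sketch of the arithmetic:

```python
# Minimum measurable concentration from A = epsilon * c * l
A_min = 0.02            # minimum detectable absorbance (dimensionless)
epsilon = 10_000        # molar extinction coefficient, L mol^-1 cm^-1
path_length = 1         # cuvette path length, cm

c_molar = A_min / (epsilon * path_length)   # mol/L
c_micromolar = c_molar * 1e6                # 2e-6 mol/L = 2.0 uM
```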
Chapter 4 - Test - Page 472: 19 $x=2$ Work Step by Step $\bf{\text{Solution Outline:}}$ To solve the given equation, $\log_2 x+ \log_2(x+2)=3 ,$ use the properties of logarithms to simplify the expression at the left. Then convert to exponential form and use the concepts of solving quadratic equations. Check the solution/s against the original equation. $\bf{\text{Solution Details:}}$ Using the Product Rule of Logarithms, which is given by $\log_b (xy)=\log_bx+\log_by,$ the equation above is equivalent to \begin{array}{l}\require{cancel} \log_2 [x(x+2)]=3 .\end{array} Since $\log_by=x$ is equivalent to $y=b^x$, the equation above, in exponential form, is equivalent to \begin{array}{l}\require{cancel} x(x+2)=2^3 \\\\ x(x+2)=8 .\end{array} In the form $ax^2+bx+c=0,$ the equation above is equivalent to \begin{array}{l}\require{cancel} x(x)+x(2)=8 \\\\ x^2+2x=8 \\\\ x^2+2x-8=0 .\end{array} Factoring the quadratic (the factors can be checked by expanding with FOIL, $(a+b)(c+d)=ac+ad+bc+bd$), the equation above becomes \begin{array}{l}\require{cancel} (x+4)(x-2)=0 .\end{array} Equating each factor to zero (Zero Product Property), the solutions are \begin{array}{l}\require{cancel} x+4=0 \\\\\text{OR}\\\\ x-2=0 .\end{array} Solving each equation results in \begin{array}{l}\require{cancel} x+4=0 \\\\ x=-4 \\\\\text{OR}\\\\ x-2=0 \\\\ x=2 .\end{array} If $x=-4,$ the term $\log_2 x$ of the given expression becomes $\log_2(-4),$ which is not a real number. Hence, only $x=2$ satisfies the original equation.
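The accepted root can also be confirmed numerically (my own quick check, not part of the worked solution): $x=2$ satisfies the equation exactly, while $x=-4$ lies outside the domain of the logarithm.

```python
import math

# x = 2 satisfies log2(x) + log2(x+2) = 3
x = 2
assert abs(math.log2(x) + math.log2(x + 2) - 3) < 1e-12

# x = -4 is rejected: log2 of a negative number is undefined
try:
    math.log2(-4)
    reachable = True   # not reached: the call above raises ValueError
except ValueError:
    reachable = False
assert not reachable
```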
# How does a full Wheatstone bridge with strain gauges work? I have gone through a number of tutorials on the Wheatstone bridge and how to use it to sense load using strain gauges. I have also gone through the calculations where one of the resistors of the Wheatstone bridge is a strain gauge, and how the voltage we measure across the Wheatstone bridge is proportional to the change in the resistance of the strain gauge. But when we use the full bridge, the calculations do not go through, i.e., the output voltage is not proportional to the sum of the resistance changes in all the strain gauges. How does the full Wheatstone bridge work when all four arms are strain gauges? Consider the following circuit from the tutorial page: $$E_a = \frac{E \times R_3}{R_1 + R_3}$$ $$E_b = \frac{E \times R_4}{R_2 + R_4}$$ $$\label{eq:diff} E_{ab} = E \left(\frac{R_3}{R_1 + R_3} - \frac{R_4}{R_2 + R_4}\right)$$ Let us assume the change in the resistance of each of the resistors is given by $r_i, i \in \{1, 2, 3, 4\}$. Then the above equation becomes, \begin{align*} E_{ab} &= E \left(\frac{R_3+r_3}{R_1+r_1 + R_3+r_3} - \frac{R_4+r_4}{R_2+r_2 + R_4+r_4}\right).
\end{align*} If we further assume $R_i = R\ \forall i$, the above equation becomes, \begin{align*} E_{ab} &= E \left(\frac{R+r_3}{R+r_1 + R+r_3} - \frac{R+r_4}{R+r_2 + R+r_4}\right)\\ &= E \left\{\frac{(R+r_3)(2R + r_2 + r_4) - (R+r_4)(2R + r_1 + r_3)}{(2R + r_1 + r_3)(2R + r_2 + r_4)} \right\}\\ &= E \left\{ \frac{2R^2 + Rr_2 + Rr_4 + 2Rr_3 + r_3r_2 +r_3r_4 - \left( 2R^2 +Rr_1 +Rr_3 + 2Rr_4 + r_4r_1 + r_4r_3\right)}{4R^2 + 2Rr_2 + 2Rr_4 + 2Rr_1 + r_1r_2+r_1r_4 +2Rr_3 + r_3r_2 + r_3r_4} \right\} \\ &= E \left\{ \frac{Rr_2-Rr_1 - Rr_4 + Rr_3 + r_3r_2 +r_3r_4 - r_4r_1 - r_4r_3}{4R^2 + 2Rr_2 + 2Rr_3 + 2Rr_4 + 2Rr_1 + r_1r_2+r_1r_4 + r_3r_2 + r_3r_4} \right\}\\ &=E \left\{ \frac{R(r_2-r_1 -r_4 + r_3) + r_3r_2 +r_3r_4 - r_4r_1 - r_4r_3}{4R^2 + 2R(r_2 + r_3 + r_4 + r_1) + r_1r_2+r_1r_4 + r_3r_2 + r_3r_4} \right\} \\ &=E \left\{ \frac{R(r_2-r_1 -r_4 + r_3)}{4R^2 + 2R(r_2 + r_3 + r_4 + r_1)} \right\} \quad \text{assuming each } r_ir_j \text{ is very small}\\ &=E \left\{ \frac{(r_2-r_1 -r_4 + r_3)}{4R + 2(r_2 + r_3 + r_4 + r_1)} \right\}\\ &=E \left\{ \frac{(r_2-r_1 -r_4 + r_3)}{4R} \right\} \quad \text{if } r_i \ll R\ \forall i \end{align*} The final equation is not just a simple $f(\sum_ir_i)$, which means the signal we get need not be directly proportional to the weight applied.

1: It will always be directly proportional to the weight applied. A strain gauge has a linear response, and elastic bending of a load cell is linear. Thus, we can write $r_i = k_i F$, where $k_i$ is a constant of proportionality for that gauge and $F$ the force applied. If we substitute and rearrange, we can take out $F$ and the last line of your question becomes: $$E F \left\{ \frac{(k_2-k_1 -k_4 + k_3)}{4R} \right\}$$ i.e. it is directly proportional to $F$. This still assumes $r_i \ll R\ \forall i$ of course.

2: You choose where to put the strain gauges. The constant of proportionality above has a $(k_2-k_1 -k_4 + k_3)$ term.
If all of these $k_i$ are the same, then it sums to zero. That isn't any use. To make the constant of proportionality large, we want $k_2$ and $k_3$ to be as large as possible, and $k_1$ and $k_4$ to be as negative as possible. Strain gauges with negative responses are hard to find, but the metal load cell will have some areas under tension and some under compression. So we usually choose to mount two strain gauges in an area under compression, and the other two in an area under tension. The output is not linear with just one resistor changing. A bridge with two sensing arms, where one increases by $\Delta R$ and the other decreases by the same amount, will be 100% linear in $\Delta R$.

• Does this assume that the entire bridge occupies a very small area in space? In my setup I have four strain gauges which are at least 20 cm away from one another. Jul 2, 2019 at 19:50
• It assumes that the two strain gauges undergo equal and opposite changes. Whatever it takes to do that, of course, needs to be done. It wasn't clear to me that you actually had a circuit in your question. Jul 2, 2019 at 20:00
• You may also need a pot to balance your bridge. Jul 2, 2019 at 20:01
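The linearization derived in the question can be sanity-checked numerically; this is my own sketch with arbitrarily chosen values (typical 350-ohm gauges, small $r_i$), comparing the exact two-divider output against $E(r_2-r_1-r_4+r_3)/(4R)$:

```python
# Exact bridge output vs. the small-signal approximation
E, R = 5.0, 350.0                          # excitation voltage, nominal resistance
r1, r2, r3, r4 = -0.02, 0.05, 0.04, -0.01  # small resistance changes, r_i << R

R1, R2, R3, R4 = R + r1, R + r2, R + r3, R + r4
exact = E * (R3 / (R1 + R3) - R4 / (R2 + R4))
approx = E * (r2 - r1 - r4 + r3) / (4 * R)

# agreement up to terms of order (r_i / R)^2
assert abs(exact - approx) < 1e-6
```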
## Zentralblatt MATH Publications of (and about) Paul Erdös

Zbl.No: 379.10027
Autor: Erdös, Paul; Pomerance, Carl
Title: On the largest prime factors of n and n+1. (In English)
Source: Aequationes Math. 17, 311-321 (1978).
Review: The authors prove some interesting results which give a comparison of the largest prime factors of $n$ and $n+1$. Let $P(n)$ denote the largest prime factor of $n$. One of the impressive results proved is that the number of $n \leq x$ for which $P(n) > P(n+1)$ is $\gg x$ for all large $x$. Another concerns numbers $n$ for which $f(n) = f(n+1)$, where by $f(n)$ we mean $\sum_{p^a\|n} a\,p$. Such numbers are called Aaron numbers. The authors prove that the number of Aaron numbers $\leq x$ is $O_\epsilon(x(\log x)^{-1+\epsilon})$. The reader can find other attractive results in the body of the paper.
Reviewer: K.Ramachandra
Classif.: * 11N05 Distribution of primes 11N37 Asymptotic results on arithmetic functions 11A41 Elementary prime number theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
1. ## Determining the ratio of buyers who take up a multi-buy offer (i.e. redemption rate)

Hi, my question is: how do I determine the % of shoppers who took up a multi-buy offer? So for example the product normally retails for \$4.60 and the offer was buy 2 for \$6.00, so if EVERYONE takes up the offer the sell price will be \$2.99, right? But not everyone takes up the offer (some people prefer to just buy the one unit instead of two). In my data the average sell price for the week of the multi-buy promotion was \$3.86. So how do I figure out the % of sales that were made where shoppers took up the multi-buy? Thanks!

2. ## Re: Determining the ratio of buyers who take up a multi-buy offer (i.e. redemption rate)

Originally Posted by agerrrard: "...so if EVERYONE takes up the offer the sell price will be \$2.99, right?"

No: \$3.00.

Let $m$ be the fraction of your $N$ customers that took the multi-buy. Then the total sales were $m\times N\times 2+(1-m)\times N$ and the total they paid was $m\times N \times 2 \times 3.00 + (1-m)\times N \times 4.60$.
So the average price was the total received divided by the total sales: $\displaystyle p=\frac{m\times N \times 2 \times 3.00 + (1-m)\times N \times 4.60}{m\times N\times 2+(1-m)\times N}$ Now the $N$s cancel, and we are told that the average price paid was \$3.86, so: $\displaystyle p=\frac{m \times 2 \times 3.00 + (1-m) \times 4.60}{m\times 2+(1-m)}=3.86$ which you now solve for the (decimal) fraction $m$ of customers who took up the multi-buy, and multiply that by $100$ to get the percentage. CB
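Carrying the last step through (my own sketch of the arithmetic): cross-multiplying gives a linear equation in $m$, which solves to roughly 30% of customers.

```python
# (m*2*3.00 + (1-m)*4.60) / (2*m + (1-m)) = 3.86
single, multi_unit, avg = 4.60, 3.00, 3.86

# cross-multiply: 6m + 4.6(1-m) = 3.86(1+m)
#             =>  m*(2*3.00 - 4.60 - 3.86) = 3.86 - 4.60
m = (avg - single) / (2 * multi_unit - single - avg)
percentage = 100 * m   # about 30.1% of customers took up the multi-buy
```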
Building LaTeX papers with Tup PUBLISHED ON NOV 16, 2017 — MISC

Building complex LaTeX documents can be quite painful. Thankfully, a little program called Tup can automate the build process for you.

When writing a research paper, there are often two options for manuscript creation: Microsoft Word and LaTeX. In general I prefer to use LaTeX, because the idea of writing what you mean and allowing a powerful typesetting engine to produce a beautiful document appeals to me. Furthermore, it is really useful being able to keep track of the manuscript with version control. However, being over 30 years old at this point, LaTeX is not without its warts. One of the nastiest aspects of LaTeX is the build process. In Microsoft Word, there really isn’t a build step at all—what you see is really what you get. If you need to submit the document somewhere, you can do so and be reasonably confident that the recipient will see the same thing that you do. LaTeX, in my experience, is a bit more complicated. Which LaTeX compiler command should be used? Which figures need to be built before compiling the main document? What are the dependencies between parts of the document? In this post we will look at setting up automated builds for LaTeX documents.

Automatic LaTeX builds with Tup

Let’s say that you have a project directory called my_paper/ with a simple LaTeX document called main.tex inside:

\documentclass{article}
\begin{document}
My document is so awesome!
\end{document}

We can compile this document into a PDF by running the following command:

$ latexmk -pdf main.tex

Introducing Tup

Tup is a build system that fills a similar role to GNU Make, but has some nifty additional features. To get started with Tup, create an empty file called Tupfile.ini beside main.tex so that Tup knows where the root of the project is, then run tup init.

$ touch Tupfile.ini
$ tup init

Tup build rules are defined in Tupfiles. Make a build/ directory, and create a file called Tupfile within it.
: ../main.tex |> latexmk -pdf %f |> %B.pdf

This Tupfile has a single rule which says “take main.tex, run latexmk on it, and we will get main.pdf”. We can trigger the build by running Tup without any arguments.

$ tup

In this case Tup produces the PDF, but does so begrudgingly—look at those angry errors!

tup error: File XXX was written to, but is not in .tup/db. You probably should specify it as an output

This is because latexmk writes to files other than main.pdf, and Tup needs to know about all outputs produced by a command. No problem, let’s add them in.

: ../main.tex |> latexmk -pdf %f |> %B.pdf %B.fls %B.log %B.aux %B.fdb_latexmk

Run tup again, and this time the errors should be gone. Typing tup every time we edit a file is kind of annoying, but fortunately we don’t have to. Tup can monitor relevant files for us, and rerun the appropriate rules when files change.

$ tup monitor -f -a

Integration with Git version control

Let’s add version control to our little project with Git.

$ git init

When we were using Tup earlier, it created a .tup folder to keep track of file states. We should tell Git to ignore this folder.

$ echo ".tup/" >> .gitignore

Furthermore, we don’t want to commit our generated PDFs, logs, and whatever else to version control. Luckily, Tup makes it really easy to exclude generated outputs from Git. Simply add the .gitignore directive to Tupfile like so:

.gitignore
: ../main.tex |> latexmk -pdf %f |> %B.pdf %B.fls %B.log %B.aux %B.fdb_latexmk

Now when we go to stage everything…

$ git add .

…we can see that the generated files in build/ are ignored by Git.

$ git status
[...]
Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
  new file: .gitignore
  new file: Tupfile.ini
  new file: build/Tupfile
  new file: main.tex

We can add as many rules as we like to our Tupfile. For example, say we want to add a plot of a sine wave. Create a directory called figures/ with a file called sine.gnuplot inside.
set terminal cairolatex input pdf size 8cm,6cm
set output 'sine.tex'
plot sin(x)

Add a new rule to build/Tupfile for compiling the Gnuplot figure. We will make the rule general, so that Tup can compile any .gnuplot file added to the figures/ directory. We will also group the outputs into a group called <figs>. We then add the <figs> group as an order-only input for the main.tex rule, so that Tup knows that the plots should be built before the main document.

.gitignore
: foreach ../figures/*.gnuplot |> gnuplot %f |> %B.tex %B.pdf <figs>
: ../main.tex | <figs> |> latexmk -pdf %f |> %B.pdf %B.fls %B.log %B.aux %B.fdb_latexmk

Finally, we need to actually embed the plot in the main document, main.tex. Note that we refer to sine.tex as if it is in the current directory, since all building occurs in build/.

\documentclass{article}
\usepackage{graphicx}
\begin{document}
My document is so awesome!
\input{sine.tex}
\end{document}

Now if you edit either main.tex or sine.gnuplot, the PDF will be automatically rebuilt. Coupled with a nice PDF reader like Evince that refreshes when the document changes, you have the output displayed instantly as you work.

Adding a bibliography works pretty much how you would expect.

references.bib

@article{awesome,
  author = {Doe, John},
  title  = {How awesome things come to exist},
}

main.tex

\documentclass{article}
\usepackage{graphicx}
\begin{document}
My document is so awesome \cite{awesome}!
\input{sine.tex}
\bibliography{../references}
\bibliographystyle{ieeetr}
\end{document}

build/Tupfile

.gitignore
: foreach ../figures/*.gnuplot |> gnuplot %f |> %B.tex %B.pdf <figs>
: ../main.tex | <figs> |> latexmk -bibtex -pdf %f \
  |> %B.pdf %B.fls %B.log %B.aux %B.fdb_latexmk %B.blg %B.bbl

Conclusion

There you have it, a nice way of setting up automatic LaTeX builds. Building the document from a fresh clone of the Git repository is as easy as running tup init && tup!
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin
4-1109-1.1-c1e2-0-0 $0.515$ $0.0707$ $4$ $1109$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $2.25941$ Genus 2 curve 1109.a
4-1109-1.1-c1e2-0-1 $0.515$ $0.0707$ $4$ $1109$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $2.82389$ Genus 2 curve 1109.c
4-1109-1.1-c1e2-0-2 $0.515$ $0.0707$ $4$ $1109$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $3.14611$ Genus 2 curve 1109.b
{}
Higher "Cartan-Eilenberg" Resolutions - MathOverflow http://mathoverflow.net/questions/89598/higher-cartan-eilenberg-resolutions Jason Polak 2012-02-26T19:15:04Z 2012-05-22T18:37:08Z <p>I am a number theory graduate student learning a bit of homological algebra, and I am curious about higher complexes in abelian categories. I apologize if my post is slightly vague as I am not an expert in this area. I will use capital roman letters to denote objects or complexes; the usage will be clear from context.</p> <h2>Motivation</h2> <p>To motivate my question, let us start from a single object $M$ in an abelian category $\mathcal{C}$. If $\mathcal{C}$ has enough projectives, we can form a projective resolution $P\to M$ of $M$, apply a right exact additive functor $F:\mathcal{C}\to\mathcal{D}$ to $P$, and calculate homology. Here $\mathcal{D}$ is just some other abelian category. This will give us the derived functors of $F$, and is a standard and well-known construction.</p> <p>Of course, it doesn't stop there. In the same abelian category $\mathcal{C}$ with enough projectives, any <em>chain complex</em> $M$ also has a (left) Cartan-Eilenberg resolution $P\to M$. Recall $P$ is an upper half plane double complex and the map $P\to M$ is just a chain map $P_{\bullet,0}\to M_\bullet$. Finally, $P$ is required to satisfy some axioms making it into a sort of 2-dimensional version of a projective resolution. 
I won't go into detail because this is also fairly standard.</p> <p>The point is that we can also apply a right-exact functor $F$ to this double complex $P$ and take the homology of the total direct-sum complex of $P$ (if it exists!); that is $H_i(Tot^\oplus(FP))$, to get the <em>hyperderived</em> functors of $F$.</p> <h2>The Question</h2> <p>It seems as though there is a natural generalization. One can easily define an $n$-complex in an analogous fashion to $2$-complexes. Higher dimensional complexes don't really show up much as far as I can tell, although I believe in Cartan-Eilenberg a $4$-complex is used somewhere (sorry, I don't have the book with me!).</p> <p>So I suppose my question is:</p> <blockquote> <p>Suppose $\mathcal{C}$ is an abelian category with enough projectives. Is it true that for any $n$, an $n$-complex $M$ has some appropriate higher Cartan-Eilenberg resolution (which would be an $(n+1)$-complex)?</p> </blockquote> <p>Appropriate means that if $P\to M$ is this hypothetical higher Cartan-Eilenberg resolution, then applying a right exact additive functor $F$ to $P$ and taking the homology of the total direct-sum complex (if it exists) gives the "correct" notion of $n$-hyperderived functors.</p> <h2>Comments</h2> <p>I have searched the literature for this concept but I could not find anything relevant. I am thinking that there are two possibilities: (a) yes, higher Cartan-Eilenberg resolutions exist and are interesting, or (b) yes, higher Cartan-Eilenberg resolutions exist but don't capture any new information and so are not that interesting. 
I'd be a bit surprised if they <em>don't</em> exist but I do not have enough experience in homological algebra to understand the bigger picture here.</p> <p>Also, we could have phrased this question in terms of injectives and (right) Cartan-Eilenberg resolutions.</p> <p>Thanks</p> http://mathoverflow.net/questions/89598/higher-cartan-eilenberg-resolutions/89641#89641 Answer by Ralph 2012-02-27T06:28:38Z <p>The process can certainly be iterated as explained by Marc (see also Weibel, Homological Algebra, 1.2.5. Moreover cf. 1.2.3, 2.2.2 for the fact that the category of chain complexes over an abelian category with enough projectives is again an abelian category with enough projectives). </p> <p>However, it seems to me that it isn't often used. A reason might be that in many (most?) cases one isn't interested in a double complex (or higher dimensional analogs) itself but in the (co)homology of its total complex (the definition of hyperderived functors in your question is an example of this point of view).</p> <p>However, in this case there is no need to jump into a higher dimension to define a projective resolution. For, there is an alternative definition of the projective resolution of a chain complex that is - in my opinion - much more elegant and easier to work with than Cartan-Eilenberg's definition: </p> <blockquote> <p>A projective resolution of the chain complex $C$ is a complex $P$ of projectives together with a quasi-isomorphism $f: P \to C$ (i.e. $f$ is a chain map such that $H_n(f): H_n(P) \to H_n(C)$ is an isomorphism for all $n$). </p> </blockquote> <p>Note that such a $P$ is in general not a projective object in the category of chain complexes, but it yields the same hyperderived functors, hypercohomology spectral sequences, etc. 
For a textbook reference of this definition see for example </p> <ul> <li>McCleary, A User's Guide to Spectral Sequences (before Theorem 12.12) </li> <li>Benson, Representations and Cohomology I, Definition 2.7.4</li> </ul>
{}
# Solution to Practice Problem #6

This is a hard problem, and the solution below is advanced; it uses techniques I would only teach properly after building the correct foundations.

Question: A father is three times as old as his son was at the time when the father was twice as old as his son will be in two years. What are the current ages of the father and son if the sum of their ages now is 55 years?

Let $F$ and $S$ be the current ages of the father and son, and let $t$ be the number of years ago referred to in the problem. Then

$$\begin{aligned} F &= 3\left( S - t \right) \\ F - t &= 2\left( S + 2 \right) \\ F + S &= 55 \end{aligned}$$

Rewriting the system in standard form:

$$\begin{aligned} F - 3S + 3t &= 0 \\ F - 2S - t &= 4 \\ F + S + 0t &= 55 \end{aligned}$$

$$\operatorname{rref}\left[ \begin{array}{ccc|c} 1 & -3 & 3 & 0 \\ 1 & -2 & -1 & 4 \\ 1 & 1 & 0 & 55 \end{array} \right] = \left[ \begin{array}{ccc|c} 1 & 0 & 0 & 39 \\ 0 & 1 & 0 & 16 \\ 0 & 0 & 1 & 3 \end{array} \right]$$

The father is 39 years old, the son is 16 years old, and the time span is 3 years.
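As a quick cross-check, the same augmented system can be solved numerically. This is a sketch using NumPy (the names F, S, t mirror the solution above; NumPy itself is my choice of tool, not part of the original page):

```python
import numpy as np

# Coefficient matrix and right-hand side for the system derived above:
#   F - 3S + 3t = 0
#   F - 2S -  t = 4
#   F +  S + 0t = 55
A = np.array([[1.0, -3.0,  3.0],
              [1.0, -2.0, -1.0],
              [1.0,  1.0,  0.0]])
b = np.array([0.0, 4.0, 55.0])

# Solve the 3x3 linear system A x = b for (F, S, t)
F, S, t = np.linalg.solve(A, b)
print(F, S, t)  # F = 39, S = 16, t = 3 (up to floating-point rounding)
```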
{}
# Coffee is being poured into the mug shown in the figure at a constant rate (measured in volume per unit time). Sketch a rough graph of the depth of the coffee in the mug as a function of time. Account for the shape of the graph in terms of concavity. What is the significance of the inflection point?

## Point $A$ is an inflection point and is significant because the rate of increase in depth changes from increasing to decreasing (depth still increasing but at a slowing rate).

Derivatives Differentiation Volume

### Video Transcript

So coffee is being poured into the mug shown in the figure at a constant rate, measured in volume per unit time, and we're being asked to sketch a rough graph of the depth of the coffee in the mug as a function of time and to account for the shape of the graph in terms of concavity. And what is the significance of the inflection point? So here is a rough sketch of the mug of coffee that's in the textbook: you can see that it is very wide at the bottom, then gets narrow, and then gets wide again, so that's just a general idea of what the mug looks like. For our graph, the y-axis will be depth and the x-axis will be time. When we first begin pouring coffee at a constant rate, the depth won't be increasing very quickly: the mug is wide at the bottom, and it takes more coffee to fill up that part, so less depth is being gained. Then the rate of increase picks up as we get into the more narrow range; in the narrow part the depth increases really fast. Then the mug gets wide again and sort of does the opposite, and the depth rises more and more slowly.
And so we have an inflection point: the graph is concave up, then concave down, and the inflection point is somewhere in the middle. The significance of the inflection point is that at this moment the depth of the coffee is increasing faster than at any other time. The coffee is being poured at a constant rate throughout, but at the narrowest part of the mug the height, or depth, of the coffee increases at a faster rate than at any other point in the mug. And that's really the interpretation of the inflection point: it tells us where the depth is increasing at the fastest rate.
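The transcript's argument can be made precise with one equation. Writing $V(t)$ for the volume of coffee at time $t$ and $A(h)$ for the horizontal cross-sectional area of the mug at depth $h$ (notation of my own, not from the textbook), the chain rule gives

$$\frac{dV}{dt} = A(h)\,\frac{dh}{dt} = k \quad\Longrightarrow\quad \frac{dh}{dt} = \frac{k}{A(h)},$$

where $k$ is the constant pour rate. The depth therefore rises fastest where $A(h)$ is smallest, i.e. at the narrowest part of the mug, which is exactly where the graph of depth versus time has its inflection point.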
{}
# Solution to Klein-Gordon equation always valid?

We know that there is a relativistic version of the Schrödinger equation called the Klein-Gordon equation. However, it has some problems, and to handle these problems there is the Dirac equation. So the question is: if there is a solution that is allowed by the Klein-Gordon equation but not by the Dirac equation, can this solution be considered valid? Also, can the Dirac equation be used for spin-0 particles?

- see arxiv.org/abs/quant-ph/0307059 for a solution to the problems of the KG-equation – Christoph Oct 8 '12 at 11:41

## 1 Answer

The vacuum Dirac equation automatically implies the Klein-Gordon equation: every solution to the vacuum Dirac equation is automatically a solution to the Klein-Gordon equation. The converse of course doesn't hold. The most basic reason is that the Klein-Gordon equation should really act on scalars, a single bosonic field, while the minimum number of components for the $d=4$ Dirac equation is four (and they should be fermionic fields). So a general (or generic) valid solution to the Klein-Gordon equation is a valid solution to the Klein-Gordon equation (this much is a tautology, but you were asking about it), but it is not a solution to the Dirac equation. Even if you combine 4 solutions to the Klein-Gordon equation, declare that they are 4 components of a Dirac spinor, and ask whether they solve the Dirac equation, the answer is No. It's because the Dirac equation is really "stronger" than the Klein-Gordon equations for its components. Effectively, the Dirac equation is first-order while the Klein-Gordon equation is second-order. The Dirac equation implies certain correlations between the spin (up/down) of the particle and the sign of the energy (positive/negative). The quadruplet of Klein-Gordon equations allows all combinations of spin up/down and the sign of the energy. 
However, the most general quadruplet of solutions to the Klein-Gordon equation may be written as a combination of a solution to the Dirac equation with a positive mass and a solution to the Dirac equation with a negative (opposite) mass. The Dirac equation describes spin-1/2 (and therefore "fermionic") particles such as electrons, other leptons, and quarks, while the Klein-Gordon equation describes spin-0 "scalar" (and bosonic) particles such as the Higgs boson. However, before they do the proper job, the "wave functions" have to be promoted to full fields and these fields have to be quantized.
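The claim that the vacuum Dirac equation implies the Klein-Gordon equation is a one-line computation: act on $(i\gamma^\mu\partial_\mu - m)\psi = 0$ with the conjugate operator and use the Clifford algebra relation $\{\gamma^\mu,\gamma^\nu\} = 2\eta^{\mu\nu}$ (metric signature $(+,-,-,-)$, units $\hbar = c = 1$):

$$(i\gamma^\nu\partial_\nu + m)(i\gamma^\mu\partial_\mu - m)\psi = \left(-\tfrac{1}{2}\{\gamma^\mu,\gamma^\nu\}\partial_\mu\partial_\nu - m^2\right)\psi = -\left(\partial^\mu\partial_\mu + m^2\right)\psi = 0,$$

so each of the four components of $\psi$ separately satisfies the Klein-Gordon equation $(\Box + m^2)\psi = 0$. The converse fails because the first-order Dirac equation imposes the extra spin-energy correlations described in the answer.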
{}
# Conditional probability of being in a certain group of students

Assume that the class consists of 45 percent freshmen, 5 percent sophomores, 40 percent juniors, and 10 percent seniors. Assume further that 45 percent of the freshmen, 40 percent of the sophomores, 20 percent of the juniors, and 30 percent of the seniors plan to go to medical school. One student is selected at random from the class. (1) What is the probability that the student plans to go to medical school? (2) If the student plans to go to medical school, what is the probability that he is a junior?

### Progress

I keep getting $.5125$ for number one and can't figure out why the computer system keeps telling me it's wrong. I really need help on this. Thank you in advance for your help!

You asked about the first question, so I'll respond to that one only. The second one is a different question. Use what you know about conditional probability for that one. It's a good idea to always do a sanity check on your result. This is easy to do with probabilities. For (1), your result is 0.5125. Note that this corresponds to 51.25%. But none of the groups of students want to go to medical school that badly! The largest proportion of wannabe doctors is among the freshmen --- 45%. It's lower for the other groups. So if you pick one student out of the class, the probability of picking a student that wants to go to medical school can't be more than 45%. So 51% is way too high, so it's wrong. The right way to do that computation is using the law of total probability. 
What it boils down to is that you'll want to multiply and add the probabilities like this: \begin{align} P = &(\text{probability student is a freshman}) \cdot (\text{probability freshman wants to go to medical school})\\ &+ (\text{probability student is a junior}) \cdot (\text{probability junior wants to go to medical school})\\ &+ \ldots\\ &+ (\text{probability student is a senior}) \cdot (\text{probability senior wants to go to medical school}). \end{align} So: $P = 0.45 \cdot 0.45 + 0.05 \cdot 0.40 + 0.40 \cdot 0.20 + 0.10 \cdot 0.30 = 0.3325$ Perform your own sanity check. Does this look reasonable? Look at the proportions of the students. 40% of the students are juniors, but only 20 percent of them want to go to medical school; and 45% of the students are freshmen, 45% of whom want to go to medical school. Together those are 85% of the students, and averaging their rates puts the overall percentage somewhere in the low 30's. The other 15% of the students want to go to medical school at rates between 30% and 40%. So an answer in the low 30's can be expected, and $0.3325 = 33.25\%$ sounds reasonable.

To make the notation easier to follow, I'll introduce the events $A_1$ "he is a freshman", $A_2$ "he is a sophomore", $A_3$ "he is a junior", $A_4$ "he is a senior" and $X$ "he goes to medical school". The information you have: $P(X\vert A_1)=0.45$, $P(X\vert A_2)=0.4$, $P(X\vert A_3)=0.2$ and $P(X\vert A_4)=0.3$. And by the way, you know $P(A_1)$, $P(A_2)$, $P(A_3)$ and $P(A_4)$. Your first question is to determine $P(X)$; you can then use: $P(X)=\sum\limits_{i=1}^{4}P(X\vert A_i)P(A_i)$ For your second question, you have to determine $P(A_3\vert X)$ and for this, you can use Bayes' theorem (or formula).
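A quick numerical check of both parts (a sketch of my own; the dictionaries and variable names are not from the original post):

```python
# Class composition and per-class probability of planning on medical school
share = {"freshman": 0.45, "sophomore": 0.05, "junior": 0.40, "senior": 0.10}
med   = {"freshman": 0.45, "sophomore": 0.40, "junior": 0.20, "senior": 0.30}

# (1) Law of total probability: P(X) = sum_i P(X | A_i) * P(A_i)
p_med = sum(share[c] * med[c] for c in share)
print(round(p_med, 4))  # 0.3325

# (2) Bayes' theorem: P(junior | X) = P(X | junior) * P(junior) / P(X)
p_junior_given_med = share["junior"] * med["junior"] / p_med
print(round(p_junior_given_med, 4))  # 0.2406
```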
{}
# Golden section inserted

Algebra Level 5

What is the fundamental period of the continuous non-zero function $f$ satisfying $f(x+1)+f(x-1)=\dfrac{\sqrt{5}+1}{2} \cdot f(x)$?
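A solution sketch (my own addition, not part of the original problem page): since $\frac{\sqrt{5}+1}{2} = 2\cos\frac{\pi}{5}$, evaluating the functional equation at unit shifts gives the linear recurrence

$$f(x+2) = 2\cos\tfrac{\pi}{5}\,f(x+1) - f(x),$$

whose characteristic equation $r^2 - 2\cos\frac{\pi}{5}\,r + 1 = 0$ has roots $r = e^{\pm i\pi/5}$ on the unit circle. A shift of $x$ by $n$ therefore multiplies solutions by $e^{\pm i n\pi/5}$, which first returns to $1$ at $n = 10$; indeed $f(x) = \cos\frac{\pi x}{5}$ satisfies the equation, and the fundamental period is $10$.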
{}
# Measuring the Higgs CP Property through Top Quark Pair Production at Photon Linear Colliders

Article in Physical Review D 62(11), May 2000. DOI: 10.1103/PhysRevD.62.115005 · Source: arXiv

Abstract: We present a model-independent study of the effects of a neutral Higgs boson without definite CP-parity in the process $\gamma\gamma \to t\bar{t}$ around the mass pole of the Higgs boson. Near the resonance pole, the interference between the Higgs-exchange and the continuum amplitudes can be sizable if the photon beams are polarized and the helicities of the top and anti-top quarks are measured. Study of these interference effects enables one to determine the CP property of the Higgs boson completely. An example of the complete determination is demonstrated in the context of the minimal supersymmetric standard model. Comment: 17 pages, LaTeX file with 1 eps figure

• ##### ECFA/CERN studies of a European Neutrino Factory Complex
Article · Physical Review D · +11 more authors

• ##### Photon collider at TESLA
ABSTRACT: High energy photon colliders based on backward Compton scattering of laser light are a very natural addition to e+e− linear colliders. In this report, we consider this option for the TESLA project. A recent study has shown that the horizontal emittance in the TESLA damping ring can be further decreased by a factor of four. In this case, the γγ luminosity in the high energy part of the spectrum can reach about (1/3)Le+e−. Typical cross-sections of interesting processes in γγ collisions are higher than those in e+e− collisions by about one order of magnitude, so the number of events in γγ collisions will be larger than in e+e− collisions. Photon colliders can certainly give additional information, and they are the best tool for the study of many phenomena. The main question now is technical feasibility. The key new element in photon colliders is a very powerful laser system. 
An external optical cavity is a promising approach for the TESLA project. A free electron laser is another option. However, a more straightforward solution is "an optical storage ring (optical trap)" with a diode-pumped solid state laser injector, which is technically feasible today. This paper briefly reviews the status of a photon collider based on the linear collider TESLA, its possible parameters and existing problems. Article · Nov 2000

• ##### Neutralino-Nucleus Elastic Cross Section in the Minimal Supersymmetric Standard Model with Explicit CP Violation
ABSTRACT: We study the elastic scattering of the lightest neutralino with a nucleus in the framework of the minimal supersymmetric standard model (MSSM) with explicit flavor-preserving CP violation, including the one-loop CP-violating neutral Higgs-boson mixing effects induced dominantly by the CP phases in the top and bottom (s)quark sectors. We construct the most general form of the effective Lagrangian for the neutralino-nucleus scattering in the limit of vanishing momentum transfers and then perform a comprehensive analysis of the effects of the complex CP phases on the mass spectra of the lightest neutralino, neutral Higgs bosons and top squarks, and on the spin-dependent and spin-independent neutralino-nucleus scattering cross sections for three nucleus targets: F, Si and Ge. The CP phases can reduce or enhance the neutralino-nucleus cross sections significantly, depending on the values of the real parameters in the MSSM. Comment: LaTeX file of 26 pages with 6 eps figures. Full-text · Article · Dec 2000
{}
# Finding the most common section from the visible cells in a collection view

My goal is to determine, of all of the currently visible cells in a collection view, which section has the most visible cells. Start by getting the index paths for the visible cells:

    let visible = collectionView.indexPathsForVisibleItems // [IndexPath]

Originally I used an NSCountedSet followed by mapping and sorting.

    let counted = NSCountedSet(array: visible.map { $0.section })
    let section = counted.allObjects.map { ($0, counted.count(for: $0)) }.sorted { $0.1 > $1.1 }.first?.0 as? Int

This works but I don't like NSCountedSet because it deals with Any. I then replaced the use of NSCountedSet with reduce by reducing the visible array into a dictionary and then using a similar map and sort.

    let counted = visible.reduce([Int: Int]()) { (result, path) -> [Int: Int] in
        var updated = result
        updated[path.section, default: 0] += 1
        return updated
    }
    let section = counted.keys.map { ($0, counted[$0]!) }.sorted { $0.1 > $1.1 }.first?.0

This works as well but I'm hoping there are ways to improve this. Two main questions:

1. Can the reduce closure be improved? Is there a better way to return the updated dictionary where the keys are sections and the values are the counts?
2. Once I have the dictionary of sections and counts, is there a better way to find the section with the highest count? Is there something better than mapping to a tuple, sorting those tuples, and finally grabbing the first one?

BTW - If there is a tie for highest count, I don't care which of those sections is returned.

## 1 Answer

Re 1: You can take advantage of reduce(into:_:) which was introduced in Swift 4 precisely for this purpose:

    This method is preferred over reduce(_:_:) for efficiency when the result is a copy-on-write type, for example an Array or a Dictionary.

See also SE-0171 Reduce with inout:

    Motivation: The current version of reduce needs to make copies of the result type for each element in the sequence. 
The proposed version can eliminate the need to make copies (when the inout is optimized away). In your case:

    let counted = visible.reduce(into: [Int: Int]()) { (result, path) in
        result[path.section, default: 0] += 1
    }

The compiler can also infer the type of the initial accumulator automatically if the closure consists only of a single statement:

    let counted = visible.reduce(into: [:]) { (result, path) in
        result[path.section, default: 0] += 1
    }

Re 2: A dictionary is a collection of key/value pairs, therefore it can be sorted directly, without mapping each key to a tuple first:

    let section = counted.sorted(by: { $0.value > $1.value }).first?.key

Even better, use max(by:) to find the dictionary entry with the maximal value:

    let section = counted.max(by: { $0.value < $1.value })?.key

This is shorter and eliminates the intermediate arrays and the dictionary lookup. It is more efficient than sorting an array because only a single traversal of the collection is done. And even if the forced unwrapping is safe in your case, it is nice not to have it.

One could also make this a generic method for sequences:

    extension Sequence {
        func mostFrequent<T: Hashable>(by map: (Element) -> T) -> T? {
            let counted = reduce(into: [:]) { (result, elem) in
                result[map(elem), default: 0] += 1
            }
            return counted.max(by: { $0.value < $1.value })?.key
        }
    }

which is then used as

    let section = collectionView.indexPathsForVisibleItems.mostFrequent(by: { $0.section })

• Excellent. That reduce(into:) was just what I needed to make that part cleaner. And I had been thinking about how to use max but I forgot about being able to use it like this with a dictionary. Great info. The extension is a great touch too. – rmaddy May 15 '19 at 21:37
{}
# Implementation of merge sort algorithm in C++

This is my implementation of "merge sort" in C++. I'm still a newbie at C++ so:

• Have I understood how merge sort works and implemented it in the right way?
• Have I used any bad practices?
• How could the code be improved upon in terms of efficiency?

Criticism is welcome and appreciated!

    #include <iostream>
    #include <vector>
    #include <string>
    #include <sstream>

    using namespace std;

    vector<float> merge(vector<float> firstHalf, vector<float> secondHalf){
        vector<float> combined;
        for(int i = firstHalf.size() + secondHalf.size(); i > 0; i--){//merge two vectors
            if(firstHalf.back() > secondHalf.back() && !firstHalf.empty()){
                combined.push_back(firstHalf.back());
                firstHalf.pop_back();
            }else if(!secondHalf.empty()){
                combined.push_back(secondHalf.back());
                secondHalf.pop_back();
            }
        }
        vector<float> revCombined;//reverse merged vectors. Vectors don't have pop_front and I didn't want to use lists.
        for(int i = 0; i < combined.size(); i++){
            revCombined.push_back(combined[combined.size()-i-1]);
        }
        return revCombined;
    }

    vector<float> mergeSort(vector<float> &inputArray){//for example [9, 8, 1] as input
        if(inputArray.size() > 1){
            vector<float> firstHalf;
            vector<float> secondHalf;
            for(int i = 0; i < inputArray.size()/2; i++){//auto round the input array because size() returns int
                firstHalf.push_back(inputArray[i]);
            }//first half = [9, 8]
            for(int i = inputArray.size()/2; i < inputArray.size(); i++){
                secondHalf.push_back(inputArray[i]);
            }//second half = [1]
            return merge(mergeSort(firstHalf), mergeSort(secondHalf));
        } else{
            return inputArray;
        }
    }

    vector<string> split(string str, char delimiter) {
        vector<string> internal;
        stringstream ss(str); // Turn the string into a stream.
        string tok;
        while(getline(ss, tok, delimiter)) {
            internal.push_back(tok);
        }
        return internal;
    }

    vector<float> floatVectorInput(){
        string inputString;
        getline(cin, inputString);
        vector<string> stringArray = split(inputString, ' ');
        vector<float> array;
        for(int i = 0; i < stringArray.size(); i++){
            array.push_back(stof(stringArray[i]));
        }
        return array;
    }

    int main(){
        cout << "Array to sort (separate by spaces): " << endl;
        vector<float> inputArray = floatVectorInput();
        vector<float> sorted = mergeSort(inputArray);
        cout << endl << "Sorted Array:" << endl;
        for(int i = 0; i < sorted.size(); i++){
            cout << sorted[i];
            if(i == sorted.size()-1){
                cout << endl << endl;
            }else{
                cout << ", ";
            }
        }
        return 0;
    }

• Have you tested your code? To me it seems that the check if(firstHalf.back() > secondHalf.back() && !firstHalf.empty()) will cause a segfault. Also, you keep creating and moving a lot of vectors, perhaps it is better to just keep one scratchpad vector and do the merging on it. Finally, you usually sort a vector by modifying it rather than creating another vector with elements sorted. – Raziman T V Jan 21 '17 at 17:26
• Doesn't cause a Segmentation Fault for me – user2635139 Jan 21 '17 at 17:34
• @RazimanT.V. The code seems to work correctly. You can play with it (and various inputs) here: ideone.com/rWbAsl – πάντα ῥεῖ Jan 21 '17 at 17:37
• The code performs back() on provably empty vectors. That is undefined behaviour and just waiting for the right input to crash. – Raziman T V Jan 21 '17 at 17:40
• What version of C++ are you using? If it is c++11 or c++14, please add the relevant tags to your question. – 5gon12eder Jan 21 '17 at 18:15

## merge calls back() on empty vector

The logic in your merge function has a bug.

    if (firstHalf.back() > secondHalf.back() && !firstHalf.empty()) { … }

The first problem is that you access firstHalf.back() before the check that !firstHalf.empty(). The checks should be in reversed order. 
The second problem is that you're also accessing secondHalf.back() without checking whether !secondHalf.empty(). While the code could be fixed with fewer changes, I suggest that you look at the problem again: merging only really makes sense if neither of the halves is empty. So let's break the problem into two parts:

1. Merge two non-empty containers.
2. Append the remaining elements to the already merged container.

    std::vector<float> combined{};

    // Merge two non-empty containers.
    while (!firstHalf.empty() && !secondHalf.empty()) {
        if (firstHalf.back() > secondHalf.back()) {
            combined.push_back(firstHalf.back());
            firstHalf.pop_back();
        } else {
            combined.push_back(secondHalf.back());
            secondHalf.pop_back();
        }
    }

    // Append the remaining elements to the already merged container.
    while (!firstHalf.empty()) {
        combined.push_back(firstHalf.back());
        firstHalf.pop_back();
    }
    while (!secondHalf.empty()) {
        combined.push_back(secondHalf.back());
        secondHalf.pop_back();
    }

This version is not only correct but also simpler to understand.

## Input does not handle excess spaces correctly

If you feed your program with input that puts more than one space between two numbers, your program will crash. While it is a legitimate choice to require exactly one space between subsequent numbers, it turns out that you can make the behavior more user-friendly and simplify the code at the same time.

    std::vector<float> floatVectorInput() {
        std::vector<float> numbers{};
        std::string line{};
        std::getline(std::cin, line);
        std::istringstream iss{line};
        float x;
        while (iss >> x) {
            numbers.push_back(x);
        }
        if (!iss.eof()) {
            throw std::runtime_error{"Garbage input"};
        }
        return numbers;
    }

Instead of first splitting into a vector of strings, I'm reading the floats from the std::istringstream directly. This will skip over any amount of white-space as desired. 
The !iss.eof() check is there because I want to make sure that we stop because the input is exhausted, not because there was something that is not a floating-point number.

## Avoid using namespace std

I know that many C++ tutorials want you to put using namespace std; into every file. I don't think that this is good advice and recommend against doing it. Importing namespaces is a complex operation and you probably don't understand all its implications. For example, what will the following code print with and without the using namespace std;?

    #include <iostream>
    #include <utility>

    //using namespace std;

    void swap(double x, double y) {
        std::clog << "swap(" << x << ", " << y << ")\n";
    }

    int main() {
        int a = 1;
        int b = 2;
        swap(a, b);
        swap(3, 4);
    }

With the offending line commented out, the code will print swap(1, 2) and swap(3, 4) as you'd probably expect. However, with using namespace std, it will only print the second line. What happened? By using namespace std, we've made std::swap (defined in <utility>) visible. Since our own swap takes doubles as parameters, calling it with an argument of type int is not a perfect match. What the C++ compiler does is adding an implicit conversion from int to double. However, if there is also a function that doesn't require this conversion to happen, the compiler will prefer it. It just so happens that std::swap is a template that will be a better match in this case. So why is only the first call resolved to std::swap then? This is because std::swap takes its arguments as mutable references. A temporary object (like an integer literal in this case) doesn't bind to a mutable reference. I understand that this is complicated stuff for a beginner and you probably shouldn't have to worry about it at this point. But if you're using namespace std, you'll have to know it or you won't understand your code. 
That said, using namespace std is also frowned upon in production-quality code (written by people who ought to understand the aforementioned language rules) so you're teaching yourself a bad habit by using it. Just be explicit and prefix symbols from the standard library with std::. It will tell the reader at a glance where the symbol comes from which makes understanding the code easier for readers of any level of experience.

## Be const correct

Your mergeSort function doesn't modify its argument (i.e. it doesn't sort the vector in-place) but actually returns a new, sorted vector. Therefore, make its argument const.

    std::vector<float> mergeSort(const std::vector<float>& inputArray);
    //                           ^^^^^

You should do this with any variable that you don't intend to modify.

## Use the correct integer types

This comment is actually wrong.

    // auto round the input array because size() returns int

std::vector::size is of type std::vector::size_type which is probably an alias for std::size_t. In any case, it is an unsigned integer type. Compiling your code with warnings enabled will warn you about this mixing of signed and unsigned types. Unfortunately, std::size_t is an unsigned integer for historic reasons. This means that you cannot let indices go below zero which makes the loop conditions a bit more complicated in some places.

## Make double your go-to floating-point type

Unless you have a good reason to use float, prefer double for its greater range and precision. On modern hardware, you'll hardly notice a performance difference. That's not to say there are no valid use-cases for float. But unless you have such a case, your default choice should be double.

## Prefer '\n' over std::endl by default

std::endl does more than outputting a new line. It also flushes the output stream. Which is an expensive operation. Sometimes, this is just what you want. For example, in this line,

    cout << "Array to sort (separate by spaces): " << endl;

you've used it correctly. 
The text must be visible before the instruction on the subsequent line gets executed or the program might start blocking for user input before the unlucky user was asked to provide some. However, in all other places in your program, there is no need to flush. And output flushing is slow. It probably doesn't make any measurable difference for your small program but I've seen code where replacing std::endl with '\n' literally made the code ten times faster! And as a bonus: it's also less typing.

## Consider using auto type deduction (C++11)

C++11 introduced a great feature: auto type deduction. This means that if you're declaring a variable and immediately initialize it with some value, you don't have to spell out its type. The compiler will deduce it for you. So, for example, you could replace

    vector<float> sorted = mergeSort(inputArray);

by

    auto sorted = mergeSort(inputArray);

and the compiler will figure out for you that sorted is of type vector<float>. Using auto systematically can help you avoid repeating the same information over and over again and also avoid doing some silly mistakes that happen when you think you know the type. (Applying the earlier advice, we should use const auto here, of course.)

## Avoid index-based for loops (C++11)

You're using C-style for loops throughout your code. While they are not wrong, they're less readable than ideal and allow you to make some avoidable bugs with the index bookkeeping. In some cases, they might also perform worse than ideal. Since C++11, you can iterate over any standard library container using the following idiom, known as range-based for loop.

Before:

    for (int i = 0; i < stringArray.size(); i++) {
        array.push_back(stof(stringArray[i]));
    }

After:

    for (auto&& s : stringArray) {
        array.push_back(stof(s));
    }

Notice how I am using auto type deduction here. The type of s will be deduced to std::string& here but you don't necessarily have to worry about it. 
You can always use `auto&&` if you don't care about the actual type. Prefer `const auto&` if you don't intend to modify the object, though. Don't use plain `auto` here, as it will make an unnecessary copy.

## Prefer standard library algorithms over raw loops (intermediate level)

The previous item notwithstanding, you should try to avoid raw loops in your code altogether, if possible. (Search for Sean Parent's great “Better Code” talks and in particular his “no raw loops” advice.) For example, the loop from above could become this:

```cpp
std::transform(
    std::begin(stringArray), std::end(stringArray),
    std::back_inserter(array),
    [](const std::string& s){ return std::stof(s); }
);
```

I understand that this might not be very easy to understand for a beginner, and it is perfectly acceptable to write your own loops at this point. But once you become more familiar with C++, standard algorithms are definitely something to look into. (Taking this further, production code would of course use the standard library's `std::stable_sort` algorithm instead of rolling its own merge sort implementation. But implementing algorithms yourself for learning is a good exercise.)

## Make the code generic (intermediate level)

Your sorting logic can currently only be used to sort a `std::vector<float>`. You could generalize it to work with any container of any type that defines a comparison function. This is not a defect in your code. If you only have to sort `std::vector<float>`, that's fine. In fact, it is better to write specific code that is correct than to write generic code that is flawed. Once you've learned about templates and move semantics, you can attempt to make your code generic.

## Avoid unnecessary copies

I liked this comment in your code:

> Reverse merged vectors. Vectors don't have pop_front and I didn't want to use lists.

Indeed, using lists would be a bad idea here (and almost everywhere else, too). However, there is no need to create another vector.
For one, you can reverse the elements of a vector in-place. In this case, however, an even better solution would be to use a `std::deque` (which has `pop_front`) instead of a `std::vector`. Of course, you could also rework your code such that you don't actually pop the items off the front of your vector but merely iterate over it. There are some other places where your code makes copies that could be avoided. For example, the splitting into the `firstHalf` and `secondHalf` vectors isn't necessary. You could instead always pass a reference to the original vector and a pair of indices. Alternatively (and preferably), use iterators. An expert would probably also try to avoid the allocations for the merged intermediary results and instead use a single working array. But that is probably too complicated for now.

## The naming could be improved slightly

Your code is generally very readable, but in some places I'd suggest looking carefully at the chosen names. For example, `internal` doesn't really tell me anything useful. More as a matter of taste, I'd avoid names like `stringArray` in favor of `strings`, especially since it is a `std::vector` and not a `std::array`.

## Use tools to detect easy bugs

Some of the issues pointed out in this review could easily have been found by tools. The most important tool is your compiler. Always compile with a high level of warnings enabled and treat them as errors. For GCC and Clang, I recommend the `-Wall`, `-Wextra`, `-Werror` and `-pedantic` options. There are even more warnings available, but these should cover a good part. If your C++ implementation supports it, build debug builds with a “checked STL”: that is, a modified standard library that performs more checking than required (or even allowed) by the standard. When using GCC, this is really simple; all you have to do is compile with the `-D_GLIBCXX_DEBUG` option. This cannot find all issues, however. So in addition, I recommend that you compile your code with a sanitizer.
GCC and Clang both support the `-fsanitize=TOOL` flag, where `TOOL` is the name of the sanitizer to use. I recommend that you use `address` and `undefined` routinely. Using these switches will instrument your binaries with some run-time checks. Alternatively – or, preferably, in addition – you can use Valgrind, which is a tool to instrument your binaries on the fly. Instrumentation of your code is only useful, however, if you have appropriate tests that execute the instrumented code. In your case, testing would be easy: just write a function that calls `mergeSort` with a number of interesting inputs. “Interesting” in this case might mean: an empty vector, a vector of a single element, a sorted vector, a vector sorted in reverse, a vector with duplicate elements, a random vector, etc. Of course, the tests should also verify that the output is sorted correctly. Re-run these tests whenever you modify your code to be alerted of newly introduced bugs immediately.

> • Have I understood how merge sort works and implemented it in the right way?

You understood the algorithm correctly, and your code yields the desired results.

> • Have I used any bad practices?
> • How could the code be improved upon in terms of efficiency?

See my further points below, please:

## 1. Don't use using namespace std;

While that would work in your particular case, it's considered bad practice, especially when you move your code out to separate header files. See more details here: Why is “using namespace std;” considered bad practice?

## 2. Check constraints in order

This code may invoke undefined behavior if the constraints aren't checked first (the operands of the logical boolean operators are evaluated in order):

```cpp
if(firstHalf.back() > secondHalf.back() && !firstHalf.empty()){
```

Since there's the possibility of calling `std::vector::back()` on an empty vector, that code should be

```cpp
if(!firstHalf.empty() && !secondHalf.empty() && firstHalf.back() > secondHalf.back()){
```

to avoid evaluating the offending expressions.

## 3. Simplify your data input processing

Your data input function can be simplified a lot. You don't need another `split()` function to do this. Simply use a `std::istringstream`:

```cpp
vector<float> floatVectorInput(){
    string inputString;
    getline(cin, inputString);
    vector<float> array;
    std::istringstream iss(inputString);
    float val;
    while(iss >> val){
        array.push_back(val);
    }
    return array;
}
```

## 4. Prefer loops and dynamically allocated stacks over recursion

Recursive function calls like

```cpp
return merge(mergeSort(firstHalf), mergeSort(secondHalf));
```

are always limited by the (OS-defined) stack size in terms of call-stack depth. On large input lists this may bail out with a stack overflow error. You can simply replace the recursion with a `std::stack<std::pair<std::vector<float>, std::vector<float>>>` structure that is handled within a loop.

• Vectors are being modified inside, so `const` won't work. – Raziman T V Jan 21 '17 at 20:07
• @RazimanT.V. Ah, the `pop_back()` stuff, yes. THX, I've been overlooking that. – πάντα ῥεῖ Jan 21 '17 at 20:09
## Wald's fundamental identity

Let $X_{1}, X_{2}, \ldots$ be i.i.d. r.v.s with $S_{n}=X_{1}+X_{2}+\ldots+X_{n}$, and let $N$ be a stopping rule. Let $F_{n}(x)=P\left[S_{n} \leq x\right]$, $F_{1}(x)=F(x)=P\left[X_{1} \leq x\right]$, and let the m.g.f. of $X_{1}$ be given by $\phi(\theta)=\int_{-\infty}^{\infty} e^{\theta x} \, dF(x)$, which is finite if $\phi(\sigma)<\infty$, where $\sigma=\operatorname{Re}(\theta)$. We also assume that
$$\phi(\sigma)<\infty \text{ for all } \sigma,\ -\beta<\sigma<\alpha<\infty,\ \alpha, \beta>0.$$
Under these conditions, $P\left[e^{X}<1-\delta\right]>0$ and $P\left[e^{X}>1+\delta\right]>0$ for some $\delta>0$, and $\phi(\theta)$ has a minimum at some $\theta=\theta_{0} \neq 0$. Wald's *Sequential Analysis* presented the so-called Wald identity
$$E\left(e^{\theta S_{N}} /[\phi(\theta)]^{N}\right)=1 \text{ for } \phi(\theta)<\infty \text{ and }|\phi(\theta)| \geq 1.$$
Actually we shall give the proof of a more general theorem in random walk due to Miller and Kemperman (1961). Define $F_{n}(x)=P\left[S_{n} \leq x ;\ N \geq n\right]$, $N=\min \left\{n \mid S_{n} \notin(-b, a)\right\}$, $0<a, b<\infty$, and the series $F(z, \theta)=\sum_{n=0}^{\infty} z^{n} \int_{-b}^{a} e^{\theta x} \, dF_{n}(x)$. Then
$$E\left(e^{\theta S_{N}} z^{N}\right)=1+[z \phi(\theta)-1] F(z, \theta) \text{ for all } \theta,$$
which is known as Miller and Kemperman's identity.
If $\phi(\theta)=1 / z$ we get Wald's identity.

Proof. Let
$$F_{0}(x)=\begin{cases}0 & \text{if } x \leq 0 \\ 1 & \text{if } x \geq 0\end{cases}$$
and, for $n \geq 1$,
$$F_{n}(x)=P\left[S_{n} \leq x ;\ N \geq n\right]=P\left[-b<S_{m}<a,\ 1 \leq m \leq n-1;\ S_{n} \leq x\right].$$
Thus $dF_{n}(x)$ (for $x \leq -b$ or $x \geq a$) is the joint probability that the time $N$ for absorption is $n$ and that the position reached when absorption occurs lies between $x$ and $x+dx$. Hence if we take Laplace transforms with respect to $n$ and with respect to $x$ over the absorbing states, we have
$$\begin{aligned} E\left(e^{\theta S_{N}} z^{N}\right) &=\sum_{n=1}^{\infty} z^{n}\left(\int_{-\infty}^{-b} e^{\theta x} \, dF_{n}(x)+\int_{a}^{\infty} e^{\theta x} \, dF_{n}(x)\right) \\ &=\sum_{n=1}^{\infty} z^{n}\left(\int_{-\infty}^{\infty}-\int_{-b}^{a}\right) e^{\theta x} \, dF_{n}(x) \\ &=\sum_{n=1}^{\infty} z^{n} \int_{-\infty}^{\infty} e^{\theta x} \, dF_{n}(x)-F(z, \theta)+1, \end{aligned}$$
where $F(z, \theta)=\sum_{n=0}^{\infty} z^{n} \int_{-b}^{a} e^{\theta x} \, dF_{n}(x)$.

## Fluctuation Theory

In this section $X_{1}, X_{2}, \ldots, X_{n}, \ldots$ are i.i.d. r.v.s.

Theorem 3.3. If $E\left|X_{i}\right|<\infty$, then
$$P[N(b)<\infty]=1 \text{ if } EX_{i} \leq 0, \qquad <1 \text{ if } EX_{i}>0.$$
For the proof see Chung and Fuchs (1951) and Chung and Ornstein (1962), Memoirs of the American Mathematical Society.

Definition 3.2. If $S$ is uncountable and $S_{n}=X_{1}+\ldots+X_{n}$ is Markov, the $X_{i}$'s being independent, then $x$ is called a possible value of the state space $S$ of the Markov chain if there exists an $n$ such that $P\left[\left|S_{n}-x\right|<\delta\right]>0$ for all $\delta>0$. A state $x$ is called recurrent if $P\left[\left|S_{n}-x\right|<\delta \text{ i.o.}\right]=1$, i.e. $S_{n} \in (x-\delta, x+\delta)$ i.o. with probability one.

We shall conclude this section by stating two very important and famous theorems whose proofs are beyond the scope of this book.
Theorem 3.4 (Chung and Fuchs). Either every state is recurrent or no state is recurrent. (Ref. Spitzer, Random Walk (1962).)

Theorem 3.5 (Chung and Ornstein). If $E\left|X_{i}\right|<\infty$, then recurrent values exist iff $E\left(X_{i}\right)=0$.

## Exercises and Complements

Exercise 3.1. In a simple random walk with two absorbing barriers at 0 and $a$, let the position $X_{n}$ at the $n$th step be given by $X_{n}=X_{n-1}+Z_{n}$, where the $Z_{n}$'s are i.i.d. r.v.s taking values $1$ and $-1$ with corresponding probabilities $p$ and $q=1-p$. Let $\pi_{k}(n)$ be the probability of absorption at 0 of the random walk in $n$ steps starting from position $k$. Show that the generating function $G_{k}(s)=\sum_{n=0}^{\infty} \pi_{k}(n) s^{n}$, $|s|<1$, is given by
$$(q / p)^{k} \frac{\lambda_{1}^{a-k}(s)-\lambda_{2}^{a-k}(s)}{\lambda_{1}^{a}(s)-\lambda_{2}^{a}(s)}, \quad \text{where } \lambda_{1}(s)=\frac{1+\left(1-4 p q s^{2}\right)^{1 / 2}}{2 p s}, \ \lambda_{2}(s)=\frac{1-\left(1-4 p q s^{2}\right)^{1 / 2}}{2 p s}.$$
Also show that
$$\pi_{k}(n)=2^{n} p^{(n-k) / 2} q^{(n+k) / 2} \int_{0}^{1} \cos ^{n-1}(\pi x) \sin (\pi x) \sin (k \pi x) \, dx.$$
What will be the value of $\pi_{k}(n)$ in the case of a single absorbing barrier at 0, when playing against an infinitely rich opponent?

Exercise 3.2. In a random walk with two absorbing barriers at $-b$ and $a$, let the position $X_{n}$ at the $n$th step be given by $X_{n}=X_{n-1}+Z_{n}$, where the $Z_{n}$'s are i.i.d. r.v.s taking values $1$, $-1$, $0$ with corresponding probabilities $p$, $q$, $1-p-q$. If $f_{j a}^{(n)}=P\left(-b<X_{1}, X_{2}, \ldots, X_{n-1}<a,\ X_{n}=a \mid X_{0}=j\right)$, show that the generating function of $\left\{f_{j a}^{(n)}\right\}$ is given by
$$F_{j a}(s)=\frac{\left[\lambda_{1}(s)\right]^{j+b}-\left[\lambda_{2}(s)\right]^{j+b}}{\left[\lambda_{1}(s)\right]^{a+b}-\left[\lambda_{2}(s)\right]^{a+b}},$$
where $\lambda_{1}(s)$ and $\lambda_{2}(s)$ are the roots of the equation
$$p s \lambda^{2}-\lambda[1-s(1-p-q)]+q s=0.$$
If the random walk starts from the origin, what will be the expression for the generating function?
## Convergence of a delayed policy update Q-learning

I thought about an algorithm that twists standard Q-learning slightly, but I am not sure whether convergence to the optimal Q-value can be guaranteed. The algorithm starts with an initial policy. Within each episode, the algorithm conducts policy evaluation and does NOT update the policy. Once the episode is done, the policy is updated to the greedy policy based on the currently learnt Q-values. The process then repeats. I attached the algorithm as a picture. Just to emphasize: the updating policy does not change within each episode. The policy at each state is updated AFTER one episode is done, using the Q-tables. Has anyone seen this kind of Q-learning before? If so, could you please kindly guide me to some resources regarding the convergence? Thank you!

It's on-policy generalised policy iteration as written - kind of a hybrid of Monte Carlo control and SARSA - so I am surprised to see it called Q-learning. Also, no exploration, which could be a weakness in some environments. Unfortunately I have not seen it before and would not know where to find convergence resources. Could you perhaps link to where you found it, because that may give some clues? – Neil Slater – 2020-05-22T23:20:37.867

@NeilSlater Thanks for the comment. This is what I had in mind and had it written down, so I do not have a link. As for no exploration, I think the action selection here is not a big deal. One can just replace the sampling with an epsilon-greedy selection. Also, could you please explain why this looks like SARSA? – Scott Guan – 2020-05-22T23:32:05.653

To make it off-policy you could change the Q-value update step to take a max over possible actions in $s_{t+1}$ instead of making the on-policy update. That doesn't fix the lack of exploration, but it does mean you would be estimating a target policy different from the current behaviour policy.
– Neil Slater – 2020-05-22T23:35:35.633

Yes, it would be simple enough to have the behaviour policy be epsilon-greedy. I suggest you write it like that, though, because it is an important detail if you want to talk about convergence. Without exploration, convergence guarantees will be weaker. – Neil Slater – 2020-05-22T23:37:36.183

@NeilSlater I will update it. Thanks. – Scott Guan – 2020-05-23T00:47:16.923

Thanks for the update. The new algorithm is essentially Q-learning with separate batched evaluation steps and policy update steps. Could you clarify what the $\epsilon$-greedy is evaluated over - is the greedy action considered to be $\text{argmax}_a Q_t(s_t, a)$ or is it $\pi_i(s_t)$? This will make a difference to the speed of convergence in some cases. – Neil Slater – 2020-05-23T08:34:48.207

@NeilSlater We can use either. Personally, I don't care about the convergence rate or sample complexity for now; I just want to know about convergence. – Scott Guan – 2020-05-23T12:56:04.500

I suggest you pick one in the algorithm, to make it concrete what the behaviour policy is based on. Although I might think it only affects the convergence rate, it may actually affect convergence guarantees too. These kinds of details can be important. – Neil Slater – 2020-05-23T13:39:28.483
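For concreteness, the delayed-update scheme described in the question can be sketched in a few lines of Python, with the epsilon-greedy behaviour policy discussed in the comments. The environment interface, function name, and hyperparameters are all illustrative assumptions, not taken from any reference implementation:

```python
import numpy as np

def delayed_policy_q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Evaluate a frozen policy within each episode; improve the policy
    greedily from the Q-table only after the episode ends."""
    rng = np.random.default_rng(0)
    Q = np.zeros((env.n_states, env.n_actions))
    policy = np.zeros(env.n_states, dtype=int)  # arbitrary initial policy
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy around the *frozen* policy
            a = policy[s] if rng.random() > eps else rng.integers(env.n_actions)
            s2, r, done = env.step(a)
            # on-policy evaluation target: next action comes from the
            # frozen policy, not from a max over actions
            target = r + (0.0 if done else gamma * Q[s2, policy[s2]])
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
        # policy improvement happens only between episodes
        policy = Q.argmax(axis=1)
    return Q, policy
```

The bootstrap target uses `Q[s2, policy[s2]]` rather than `max_a Q[s2, a]`, which is what makes the within-episode updates pure policy evaluation; replacing it with the max would recover the off-policy variant suggested in the comments.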
# Multiple choice questions

Multiple choice questions are not the most effective way to study. The reason they are commonly used in an academic setting is that they are easy to mark, and they allow the person studying to demonstrate their ability to recognize the correct answer, even if they can't produce it themselves. When you need to pick from a selection of answers, it is easy to "cheat", as you can guess what the correct answer is if you know what the other displayed options are. When multiple choice tests are designed by humans, the test creator can craft clever "distractors" that are similar to the correct answer, making it harder for you to guess. Computers are not so good at this. If you are studying for a test and you have a sample test with multiple choice questions, like the following:

Q: What animal has a really long neck?
A:
1. A monkey.
2. A giraffe.
3. A donkey.
4. A snail.

then the recommended way to put that question into Anki is to drop the incorrect answers, turning it into a simple question/answer card instead:

Q: What animal has a really long neck?
A: A giraffe.
# secondboot: secondboot In envlpaster: Enveloping the Aster Model

## Description

A parametric bootstrap procedure evaluated at an envelope estimator of the submodel mean-value parameter vector τ that was obtained using eigenstructures or the 1d algorithm.

## Usage

secondboot(k, nboot2, out, model, index, data, amat, newdata,
  method = c("eigen", "1d"))

## Arguments

k: The index of the top-level parametric bootstrap procedure conducted by fit.boot.Efron that the second level of bootstrapping is being applied to.
nboot2: The bootstrap sample size for the second level of parametric bootstrapping.
out: The output of fit.boot.Efron.
model: An aster model object.
index: The indices denoting which components of the canonical parameter vector are parameters of interest.
data: An asterdata object corresponding to the original data.
amat: This object can either be an array or a matrix. It specifies a linear combination of mean-value parameters that correspond to expected Darwinian fitness. See the aster function help page in the original aster package for more details.
newdata: A dataframe corresponding to hypothetical individuals in which expected Darwinian fitness is to be estimated.
method: The procedure used to obtain envelope estimators.

## Details

This function implements the second level of the parametric bootstrap procedure given by either Algorithm 1 or Algorithm 2 in Eck (2015) with respect to the mean-value parameterization. This is detailed in Steps 4 through 5c in the algorithm below. At iteration b, this parametric bootstrap generates resamples from the distribution evaluated at the envelope estimator \hat{τ}_{env}^{(b)} of τ. In this case, the selected indices producing the eigenstructure which was used to construct the envelope estimator \hat{τ}_{env}^{(b)} are used to construct envelope estimators for the generated data. These resamples are used to estimate the variability of \hat{τ}_{env}^{(b)}. The algorithm using eigenstructures is as follows:
1. Fit the aster model to the data and obtain \hat{τ} = (\hat{γ}^T, \hat{υ}^T) and \hat{Σ} from the aster model fit.

2. Compute the envelope estimator of υ in the original sample, given as \hat{υ}_{env} = P_{\hat{G}}\hat{υ}, where P_{\hat{G}} is computed using eigenstructures and selected via a model selection criterion of choice.

3. Perform a parametric bootstrap by generating resamples from the distribution of the aster submodel evaluated at \hat{τ}_{env} = (\hat{γ}^T, \hat{υ}_{env}^T)^T. For iteration b = 1, ..., B of the procedure:

   (3a) Compute \hat{τ}^{(b)} and \hat{Σ}_{υ,υ}^{(b)} from the aster model fit to the resampled data.
   (3b) Build P_{\hat{G}}^{(b)} using the indices of \hat{Σ}_{υ,υ}^{(b)} that are selected using the same model selection criterion as in Step 2 to build \hat{G}.
   (3c) Compute \hat{υ}_{env}^{(b)} = P_{\hat{G}}^{(b)}\hat{υ}^{(b)} and \hat{τ}_{env}^{(b)} = \left(\hat{γ}^{(b)T}, \hat{υ}_{env}^{(b)T}\right)^T.
   (3d) Store \hat{τ}_{env}^{(b)} and g\left(\hat{τ}_{env}^{(b)}\right), where g maps τ to the parameterization of Darwinian fitness.

4. After B steps, the bootstrap estimator of expected Darwinian fitness is the average of the envelope estimators stored in Step 3d. This completes the first part of the bootstrap procedure.

5. We now proceed with the second level of bootstrapping at the b-th stored envelope estimator \hat{τ}_{env}^{(b)}. For iteration k = 1, ..., K of the procedure:

   (5a) Generate data from the distribution of the aster submodel evaluated at \hat{τ}_{env}^{(b)}.
   (5b) Perform Steps 3a through 3d with respect to the dataset obtained in Step 5a.
   (5c) Store \hat{τ}_{env}^{(b)(k)} and g\left(\hat{τ}_{env}^{(b)(k)}\right).

When the second level of bootstrapping is completed for all b = 1, ..., B, this function reports the standard deviation of the bootstrapped envelope estimator of expected Darwinian fitness.
In this case, the bootstrap procedure accounts for model selection volatility. The bootstrapped envelope estimator is

\hat{μ}_g = \frac{1}{B} ∑_{b=1}^B g(\hat{τ}_{env}^{(b)}),

where the g(\hat{τ}_{env}^{(b)}) are the stored envelope estimators of expected Darwinian fitness in the env.boot.out matrix included in the output of fit.boot.Efron. The standard deviation of the bootstrapped envelope estimator of expected Darwinian fitness is

∑_{b=1}^B \left[\widehat{cov}^{(b)T} \hat{V}^{-1} \widehat{cov}^{(b)}\right] / B,

where \widehat{cov}^{(b)} = \textbf{B}^{(b)T} C^{(b)} / K and \hat{V} = \textbf{B}^{(b)T}\textbf{B}^{(b)} / K. The matrix \textbf{B}^{(b)} \in R^{K\times p} has rows given by

\hat{τ}_{env}^{(b)(k)} - ∑_{k=1}^K \hat{τ}_{env}^{(b)(k)} / K,

and the matrix C^{(b)} \in R^{K \times d} has columns given by

g\left(τ_{env}^{(b)(k)}\right) - g\left(τ_{env}^{(b)}\right).

For more details, see Efron (2014) and Eck (2015). The parametric bootstrap procedure which uses the 1d algorithm to construct envelope estimators is analogous to the above algorithm. To use the 1d algorithm, the user specifies method = "1d" instead of method = "eigen".

## Value

sd.Efron: The estimated standard deviation (sd) of the estimated expected Darwinian fitness, where estimation is conducted using envelope methodology. This sd accounts for model selection volatility. An eigenvalue decomposition using eigen is used internally to calculate this quantity.
cov: A component needed to construct sd.Efron if other numerical methods are desired.
V: A component needed to construct sd.Efron if other numerical methods are desired.
MLE.tau.boot.subsample: A component needed to construct sd.Efron if other numerical methods are desired.
est.env.subsample: A component needed to construct sd.Efron if other numerical methods are desired.

## References

Cook, R. D. and Zhang, X. (2014). Foundations for envelope models and methods. JASA, in press.
Cook, R. D. and Zhang, X. (2015). Algorithms for envelope estimation.
Journal of Computational and Graphical Statistics, published online. doi: 10.1080/10618600.2015.1029577.
Eck, D. J., Geyer, C. J., and Cook, R. D. (2016). Enveloping the aster model. In prep.
Eck, D. J., Geyer, C. J., and Cook, R. D. (2016). Web-based supplementary materials for “Enveloping the aster model.” In prep.
Efron, B. (2014). Estimation and accuracy after model selection. JASA, 109:507, 991-1007.

## Examples

### Web-based Supplementary Materials for “Enveloping the aster model.”

envlpaster documentation built on May 29, 2017, 3:31 p.m.
2019-06-09 11:29:37 +0100 asked a question Directory problems

Hi everybody. I would like to create some kind of structure (packages) in Sage. Let's say I have the following directory structure: Main_Dir, which contains the subdirectories [Dir_A, Dir_B, ..., Test]. Dir_A contains some Sage files/classes: file1.sage, file2.sage. In file1.sage I have load('file2.sage'). My Test directory is the place where I would like to test all the functions I have created. It contains some test files test1.sage, test2.sage, ..., assemble_all.sage. In test1.sage I have the line load('../Dir_A/file1.sage'). And here I get the following error:

    raise IOError('did not find file %r to load or attach' % filename)
    IOError: did not find file './file2.sage' to load or attach

I hope somebody has an answer to this kind of problem.

2019-06-09 11:28:44 +0100 asked a question Create new sage project

2018-02-18 15:15:17 +0100 asked a question Pullback of ideals

Hi. I have the following question, and I hope that somebody of you has a good idea for the implementation and an explanation of the error.
Setting:

- A number field K (in general non-Galois)
- L, the Galois closure of K
- phi: K --> L an arbitrary embedding of K into L
- I a fractional ideal in K, and IL = phi(I)

Question: How to compute the pullback of IL for (general) fractional ideals of K?

My approach: Let V, W be two QQ-vector spaces and f: V --> W a linear map (morphism). Let further V' and W' be subspaces of V and W. The aim is to identify the subspace V' = f^(-1)(W') as the preimage of W' under f. Let p: V x W' --> W be the linear map defined by (v, w') |--> f(v) - w', with ker(p) := {(v, w') in V x W' | f(v) - w' = 0_W} = {(v, w') : f(v) = w'}. Such vectors v are the vectors in the preimage of W'.

    def inverseImage(IL, K, phi):
        ZK = K.maximal_order()
        dK = K.degree()
        BZK = ZK.basis()
        M = Matrix(QQ, [ list(phi(b)) for b in BZK ])
        BJ = IL.basis()
        N = Matrix(QQ, [ list(b) for b in BJ ])
        vs = M.stack(N).integer_kernel().basis()
        BI = [ sum([ v[i]*BZK[i] for i in [0..(dK - 1)] ]) for v in vs ]
        IK = ZK.fractional_ideal(BI)
        return IK

EXAMPLE 1:

    sage: K = NumberField(x^6 - 2*x^5 - 6*x^3 + 151*x^2 + 76*x + 861, 'a')
    sage: L.<b> = K.galois_closure()
    sage: phi = K.embeddings(L)[1]
    sage: I = K.fractional_ideal([129, x - 54])
    sage: I_ = inverseImage(IL, K, phi)
    sage: I_ == I
    True

EXAMPLE 2 (and the first problem: losing the denominator):

    sage: I = K.fractional_ideal([2/3])
    sage: I
    Fractional ideal (2/3)
    sage: I_ = inverseImage(phi(I), K, phi)
    sage: I_
    Fractional ideal (2)
    sage: I == I_
    False

My improvement approach:

    ...
    num_IL = IL.numerator().gens()[0]
    denom_IL = IL.denominator().gens()[0]
    IK = ZK.fractional_ideal([num_IL/denom_IL, BI])
    return IK

Now EXAMPLE 2 returns the corresponding fractional ideal, BUT an incorrect ideal I_ in EXAMPLE 1. Thanks for helping!
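Outside Sage, the linear-algebra step described above (the preimage f^(-1)(W') obtained from the kernel of the stacked map p(v, w') = f(v) - w') can be sketched with plain sympy over the rationals. The function name and the row-vector convention are illustrative assumptions:

```python
from sympy import Matrix

def preimage_basis(M, N):
    """Basis of f^{-1}(W') for the linear map f(v) = v*M (row-vector
    convention), where the rows of N span the subspace W'.

    A left-kernel vector u = (v, c) of the stacked matrix satisfies
    v*M + c*N = 0, i.e. f(v) = -c*N lies in W', so the leading block
    of u is a vector of the preimage."""
    dV = M.rows
    A = M.col_join(N)                       # stack M on top of N
    # left kernel of A = nullspace of A^T
    return [u[:dV, 0].T for u in A.T.nullspace()]
```

For example, with M the identity on QQ^2 and W' = span{(1, 0)}, the returned basis spans {(x, 0)}, mirroring what `M.stack(N).integer_kernel()` computes in the Sage code above (up to scaling, since sympy works over QQ rather than ZZ).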
2017-12-06 22:37:56 +0100 received badge ● Student (source)
2017-12-05 19:58:40 +0100 received badge ● Scholar (source)
2017-12-05 19:54:39 +0100 received badge ● Editor
2017-12-05 19:53:37 +0100 answered a question Morphism between Klein four group and additive abelian group of order 4

Thank you for the fast answer. C is always the class group of a quartic/sextic CM field and G is the corresponding abstract group "printed by Sage". Yes, I assume that I have to construct the inverse image too.

2017-12-05 11:03:14 +0100 asked a question Morphism between Klein four group and additive abelian group of order 4

Hi. Let's say I have an ideal class group C with structure G := C2 x C2. I now want to construct a morphism phi: G --> C, s.t. phi(g1) = c1 and phi(g2) = c2.
Kernel smoother

A kernel smoother is a statistical technique to estimate a real valued function ${\displaystyle f:\mathbb {R} ^{p}\to \mathbb {R} }$ as the weighted average of neighboring observed data. The weight is defined by the kernel, such that closer points are given higher weights. The estimated function is smooth, and the level of smoothness is set by a single parameter. This technique is most appropriate for low-dimensional (p < 3) data visualization purposes. In effect, the kernel smoother represents the set of irregular data points as a smooth line or surface.

Definitions

Let ${\displaystyle K_{h_{\lambda }}(X_{0},X)}$ be a kernel defined by

${\displaystyle K_{h_{\lambda }}(X_{0},X)=D\left({\frac {\left\|X-X_{0}\right\|}{h_{\lambda }(X_{0})}}\right)}$

where:

• ${\displaystyle X,X_{0}\in \mathbb {R} ^{p}}$
• ${\displaystyle \left\|\cdot \right\|}$ is the Euclidean norm
• ${\displaystyle h_{\lambda }(X_{0})}$ is a parameter (kernel radius)
• D(t) is typically a positive real-valued function whose value decreases (or does not increase) as the distance between X and X0 increases.

Popular kernels used for smoothing include the parabolic (Epanechnikov), tricube, and Gaussian kernels.

Let ${\displaystyle {\hat {Y}}(X):\mathbb {R} ^{p}\to \mathbb {R} }$ be a continuous function of X. For each ${\displaystyle X_{0}\in \mathbb {R} ^{p}}$, the Nadaraya-Watson kernel-weighted average (smooth Y(X) estimation) is defined by

${\displaystyle {\hat {Y}}(X_{0})={\frac {\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})Y(X_{i})}}{\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})}}}}$

where:

• N is the number of observed points
• Y(Xi) are the observations at the Xi points.

In the following sections, we describe some particular cases of kernel smoothers.

Gaussian kernel smoother

The Gaussian kernel is one of the most widely used kernels, and is expressed by the equation below.
${\displaystyle K(x^{*},x_{i})=\exp \left(-{\frac {(x^{*}-x_{i})^{2}}{2b^{2}}}\right)}$

Here, b is the length scale for the input space.

Nearest neighbor smoother

The idea of the nearest neighbor smoother is the following. For each point X0, take the m nearest neighbors and estimate the value of Y(X0) by averaging the values of these neighbors. Formally, ${\displaystyle h_{m}(X_{0})=\left\|X_{0}-X_{[m]}\right\|}$, where ${\displaystyle X_{[m]}}$ is the mth-closest neighbor to X0, and

${\displaystyle D(t)={\begin{cases}1/m&{\text{if }}|t|\leq 1\\0&{\text{otherwise}}\end{cases}}}$

Example: In this example, X is one-dimensional. For each X0, ${\displaystyle {\hat {Y}}(X_{0})}$ is the average value of the 16 points closest to X0 (denoted in red). The result is not smooth enough.

Kernel average smoother

The idea of the kernel average smoother is the following. For each data point X0, choose a constant distance size λ (the kernel radius, or window width for p = 1 dimension), and compute a weighted average over all data points that are closer than ${\displaystyle \lambda }$ to X0 (points closer to X0 get higher weights). Formally, ${\displaystyle h_{\lambda }(X_{0})=\lambda ={\text{constant}},}$ and D(t) is one of the popular kernels.

Example: For each X0 the window width is constant, and the weight of each point in the window is schematically denoted by the yellow figure in the graph. It can be seen that the estimation is smooth, but the boundary points are biased. The reason is the unequal number of points (to the right and to the left of X0) in the window when X0 is close enough to the boundary.

Local linear regression

In the two previous sections we assumed that the underlying Y(X) function is locally constant, and therefore we were able to use the weighted average for the estimation. The idea of local linear regression is to fit locally a straight line (or a hyperplane for higher dimensions), not a constant (horizontal line).
After fitting the line, the estimate ${\displaystyle {\hat {Y}}(X_{0})}$ is given by the value of this line at the point X0. By repeating this procedure for each X0, one obtains the estimated function ${\displaystyle {\hat {Y}}(X)}$. As in the previous section, the window width is constant: ${\displaystyle h_{\lambda }(X_{0})=\lambda ={\text{constant}}.}$ Formally, the local linear regression is computed by solving a weighted least squares problem. For one dimension (p = 1): {\displaystyle {\begin{aligned}&\min _{\alpha (X_{0}),\beta (X_{0})}\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left(Y(X_{i})-\alpha (X_{0})-\beta (X_{0})X_{i}\right)^{2}}\\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\Downarrow \\&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\hat {Y}}(X_{0})=\alpha (X_{0})+\beta (X_{0})X_{0}\\\end{aligned}}} The closed form solution is given by: ${\displaystyle {\hat {Y}}(X_{0})=\left(1,X_{0}\right)\left(B^{T}W(X_{0})B\right)^{-1}B^{T}W(X_{0})y}$ where: • ${\displaystyle y=\left(Y(X_{1}),\dots ,Y(X_{N})\right)^{T}}$ • ${\displaystyle W(X_{0})=\operatorname {diag} \left(K_{h_{\lambda }}(X_{0},X_{i})\right)_{N\times N}}$ • ${\displaystyle B^{T}=\left({\begin{matrix}1&1&\dots &1\\X_{1}&X_{2}&\dots &X_{N}\\\end{matrix}}\right)}$ Example: The resulting function is smooth, and the problem of biased boundary points is solved. Local linear regression can be applied to spaces of any dimension, though the question of what constitutes a local neighborhood becomes more complicated. It is common to use the k nearest training points to a test point to fit the local linear regression. This can lead to high variance in the fitted function. To bound the variance, the set of training points should contain the test point in their convex hull (see the Gupta et al. reference). Local polynomial regression Instead of fitting locally linear functions, one can fit polynomial functions.
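The closed-form solution for the local linear fit translates directly into code. A minimal sketch for p = 1 with a Gaussian kernel (the data and window width below are illustrative):

```python
import numpy as np

def local_linear(x0, x, y, lam=0.5):
    """Local linear regression at x0 for p = 1:
    (1, x0) (B^T W B)^{-1} B^T W y, with Gaussian weights of radius lam."""
    w = np.exp(-((x - x0) ** 2) / (2 * lam ** 2))
    B = np.column_stack([np.ones_like(x), x])  # design matrix: columns 1, x
    BtW = B.T * w                # B^T W without forming the diagonal matrix
    beta = np.linalg.solve(BtW @ B, BtW @ y)
    return np.array([1.0, x0]) @ beta

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)                    # noiseless data, for illustration
```

Evaluated at the left boundary x0 = 0, the fitted line compensates for the one-sided window, unlike the plain kernel average.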
For p = 1, one should minimize: ${\displaystyle {\underset {\alpha (X_{0}),\beta _{j}(X_{0}),j=1,...,d}{\mathop {\min } }}\,\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left(Y(X_{i})-\alpha (X_{0})-\sum \limits _{j=1}^{d}{\beta _{j}(X_{0})X_{i}^{j}}\right)^{2}}}$ with ${\displaystyle {\hat {Y}}(X_{0})=\alpha (X_{0})+\sum \limits _{j=1}^{d}{\beta _{j}(X_{0})X_{0}^{j}}}$ In the general case (p > 1), one should minimize: {\displaystyle {\begin{aligned}&{\hat {\beta }}(X_{0})={\underset {\beta (X_{0})}{\mathop {\arg \min } }}\,\sum \limits _{i=1}^{N}{K_{h_{\lambda }}(X_{0},X_{i})\left(Y(X_{i})-b(X_{i})^{T}\beta (X_{0})\right)}^{2}\\&b(X)=\left({\begin{matrix}1,&X_{1},&X_{2},...&X_{1}^{2},&X_{2}^{2},...&X_{1}X_{2}\,\,\,...\\\end{matrix}}\right)\\&{\hat {Y}}(X_{0})=b(X_{0})^{T}{\hat {\beta }}(X_{0})\\\end{aligned}}}
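For p = 1 the local polynomial fit is again a kernel-weighted least squares problem. A short sketch (the degree, bandwidth, and test data are illustrative):

```python
import numpy as np

def local_poly(x0, x, y, d=2, lam=0.5):
    """Local polynomial regression of degree d at x0 (the p = 1 problem
    above), with Gaussian kernel weights of radius lam."""
    w = np.exp(-((x - x0) ** 2) / (2 * lam ** 2))
    B = np.vander(x, N=d + 1, increasing=True)  # columns 1, x, ..., x^d
    BtW = B.T * w
    beta = np.linalg.solve(BtW @ B, BtW @ y)
    return np.vander(np.array([x0]), N=d + 1, increasing=True)[0] @ beta

x = np.linspace(0, 2, 50)
y = 1 + 2 * x + 3 * x ** 2       # exactly quadratic test data
```

Because the test data here are exactly quadratic, the degree-2 local fit reproduces them (up to rounding) regardless of the kernel weights.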
[texhax] white space problem Tom Schneider toms at ncifcrf.gov Wed Jul 22 01:30:21 CEST 2009 Robin: > >http://www.ccrnp.ncifcrf.gov/~toms/ftp/cell.sty > >http://www.ccrnp.ncifcrf.gov/~toms/ftp/cell.bst > > cell, and uploading will get the revised versions into texlive and > miktex. (for as long as they're necessary...) Done!! Thanks for suggesting that. > >> > > And if bibtex were anything but apparent abandonware, it would have > >> > > been changed to ignore spaces after commas many years ago. > >> > > >> > Isn't it a key part of the LaTeX system??? > >> > >> Yes, and it hasn't been maintained properly. > > or "at all" -- since the 1980s. an 8-bit version has been released > (by someone other than patashnik), but there's no sign of any other > work that i can detect. > > >:-( > > bibtex will probably disappear within the year as all but > stick-in-the-mud latex users switch to biber/biblatex > > (i don't know what plain users actually do for biblios nowadays, and i > don't know what they'll do then.) Looks like it is still alpha though. Tom Dr. Thomas D. Schneider National Institutes of Health schneidt at mail.nih.gov toms at alum.mit.edu (permanent) http://alum.mit.edu/www/toms
zbMATH Vanishing theorems and string backgrounds. (English) Zbl 0990.53078 Authors’ summary: We show various vanishing theorems for the cohomology groups of compact Hermitian manifolds for which the Bismut connection has a (restricted) holonomy contained in $$SU(n)$$ and classify all such manifolds of dimension four. In this way we provide necessary conditions for the existence of such structures on compact Hermitian manifolds with vanishing first Chern class of non-Kähler type. Then we apply our results to solutions of the string equations and show that such solutions admit various cohomological restrictions such as, for example, that under certain natural assumptions the plurigenera vanish. We also find that under some assumptions the string equations are equivalent to the condition that a certain vector is parallel with respect to the Bismut connection. MSC: 53C80 Applications of global differential geometry to the sciences 81T30 String and superstring theories; other extended objects (e.g., branes) in quantum field theory 83E30 String and superstring theories in gravitational theory
# Voltage phasors in AC circuits What does it actually mean to add voltages vectorially, not algebraically, in RL or RC circuits (AC source)? I know that the voltage drop across the inductor is maximum when the voltage drop across the resistor is zero, then VR continues to increase and VL decreases, so why can't we simply add them when we calculate the total voltage drop across the two? What is the secret behind using vectors here? • To be clear, are you asking about adding time domain voltages (point wise) or adding the phasor domain voltages? Typically, AC circuits are analyzed in the phasor domain where the (phasor) voltages are complex constants. But, in your question, you're talking about time varying voltages so you must be thinking time domain? But in the time domain, voltages are simply added together. Please clarify what precisely you're asking. – Alfred Centauri Mar 9 at 20:40 so why can't we simply add them when we calculate the total voltage drop across the two? You do simply add the time domain voltages, at each moment in time, together. For example, stipulate that the resistor and inductor are series connected, and that the series current through them, $$i(t)$$, is of the form $$i(t) = I_0\sin(\omega t)$$ The voltage across the series combination is simply the sum of the voltages across the individual circuit elements $$v(t) = RI_0\sin(\omega t) + \omega LI_0\cos(\omega t)$$ Now, you may know that you can write this as a single sinusoidal function: $$v(t) = \sqrt{(RI_0)^2 + (\omega L I_0)^2}\,\sin(\omega t + \phi),\quad \tan\phi = \frac{\omega L}{R}$$ Notice that the amplitude of this sinusoid is found by adding the amplitudes of the individual sine and cosine in quadrature (like finding the length of a vector). What is the secret behind using vectors here? The result above is a hint. In the simple sum expression for $$v(t)$$, one term is a sine while the other is a cosine.
These functions are orthogonal and so we can think of the sum as being the sum of two orthogonal 'vectors'. The result is a 'vector' where each term is a separate 'component'. Recall that a plane vector has a Cartesian form $$\vec v = (x, y)$$ as well as a polar form $$\vec v = (r, \phi)$$ where $$r = \sqrt{x^2 + y^2},\quad \tan\phi = \frac{y}{x}$$ So the second expression for $$v(t)$$ can be thought of as a kind of 'polar form' of $$v(t)$$. This type of thinking comes into clear focus when we switch from the time domain to the phasor domain where each voltage and current is a complex number that expresses the amplitude and phase only of the sinusoidal function of time. Since another answer addresses your question in the context of AC phasor analysis, I'll not go further into that here. Hyportnex has given you an explanation in the time domain. The following involves phasor notation, which is in the frequency domain. You need to add ac voltages vectorially for RC and LC circuits because the voltage and currents in inductors and capacitors are 90 degrees out of phase with each other, whereas the voltage and current are in phase for a resistor. This is a consequence of the following voltage current relationships for capacitors and inductors. $$i_{C}(t)=C\frac {dV_{C}(t)}{dt}$$ $$v_{L}(t)=L\frac {di_{L}(t)}{dt}$$ Note that if the voltage across the capacitor is a sine wave, the current will be the derivative of the sine wave, or cosine, which is $$90^0$$ out of phase with the sine. Similarly, if the current in an inductor is a sine wave, the voltage across the inductor will be a cosine. For the capacitor, the current leads the voltage by $$90^0$$. For an inductor the current lags the voltage by $$90^0$$. For a resistor the voltage and current are in phase (zero phase angle) The impedances for capacitors, inductors, and resistors can be written using phasor notations. 
$$Z_{C}=X_{C}\angle -90^0$$ $$Z_{L}=X_{L}\angle +90^0$$ $$Z_{R}=R\angle 0^0$$ where $$X_C$$ and $$X_L$$ are the capacitive and inductive reactances given by $$X_{C}=\frac {1}{2πfC}$$ $$X_{L}=2πfL$$ As an example, if we have a series RC circuit with a phasor current of $$I\angle 0^0$$, the voltages across the capacitor and resistor will be, in phasor notation, $$V_{C}=IZ_{C}=(I\angle 0^0)(X_{C}\angle -90^0) = IX_{C}\angle-90^0$$ $$V_{R}=IZ_{R}=(I\angle 0^0)(R\angle 0^0) = IR\angle 0^0$$ Since $$V_R$$ and $$V_C$$ are $$90^0$$ out of phase with each other, if you want to add them, they must be added vectorially. You can get the magnitude of the complex voltage by taking the square root of the sum of the squares of the voltage magnitudes. You get the resulting phase angle between the real and imaginary components by taking the inverse tangent of $$\frac {-X_{C}}{R}$$. Hope this helps. • Just a pedantic note FWIW: many (myself included) write $Z = R + jX$, i.e., that reactance is the imaginary part of impedance, and it then follows that $X_C = -\frac{1}{2\pi f C}$ so that $Z_C = jX_C$ just as $Z_L = jX_L$. Others will write, as you have, that $X_C = \frac{1}{2\pi fC}$ but it then must be that $Z_C = -jX_C$ which conflicts with $Z = R + jX$. These 'dueling' conventions are discussed at the Wikipedia article here. – Alfred Centauri Mar 10 at 1:55 • @AlfredCentauri Yes, I'm aware of the fact that sometimes the sign (+/-) is assigned to the reactance and sometimes it is assigned to the imaginary number j. I was taught the latter (> 50 years ago) from the standpoint that reactance is the magnitude but not the sense. But it all comes out the same in the wash. Thanks for pointing it out. – Bob D Mar 10 at 2:03 You are adding two sinusoidal oscillations of the same frequency but differing in phase and amplitude, say $$V_1 (t)= A_1 \cos(\omega t +\phi_1)$$ and $$V_2(t) = A_2 \cos(\omega t +\phi_2)$$.
Now write this as $$V_1 +V_2 = A_1 \cos(\omega t +\phi_1) + A_2 \cos(\omega t +\phi_2)\\ =(A_1 \cos(\phi_1) + A_2 \cos(\phi_2)) \cos(\omega t) - (A_1 \sin(\phi_1) + A_2 \sin(\phi_2)) \sin(\omega t).$$ Set $$B = A_1 \cos(\phi_1) + A_2 \cos(\phi_2)$$ and $$C= -A_1 \sin(\phi_1) - A_2 \sin(\phi_2)$$ and notice that when you add the cartesian vectors $$\textbf{v}_1 = [A_1 \cos(\phi_1), -A_1 \sin(\phi_1)]$$ and $$\textbf{v}_2 = [A_2 \cos(\phi_2), -A_2 \sin(\phi_2)]$$ their sum has the same coordinates as $$\textbf{v}_1+\textbf{v}_2=[B,C]$$
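A quick numerical check of this identity, using complex phasors $$A e^{i\phi}$$ in place of the Cartesian vectors (the amplitudes, phases, and frequency below are arbitrary):

```python
import numpy as np

A1, phi1 = 2.0, 0.3           # illustrative amplitudes and phases
A2, phi2 = 1.5, -1.1
w = 2 * np.pi * 50.0           # arbitrary angular frequency
t = np.linspace(0.0, 0.04, 1000)

# Direct pointwise sum of the two sinusoids
direct = A1 * np.cos(w * t + phi1) + A2 * np.cos(w * t + phi2)

# Phasor ("vector") sum: add A*e^{i*phi}, then read off amplitude and phase
v = A1 * np.exp(1j * phi1) + A2 * np.exp(1j * phi2)
combined = np.abs(v) * np.cos(w * t + np.angle(v))
```

The two waveforms agree up to floating-point rounding, confirming that adding the phasors as vectors reproduces the pointwise sum of the sinusoids.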
## College Algebra 7th Edition $S_{1}=\frac{2}{3}$ $S_{2}=\frac{8}{9}$ $S_{3}=\frac{26}{27}$ $S_{4}=\frac{80}{81}$ $S_n=\frac{3^n-1}{3^n}$ We are given: $a_{n}= \frac{2}{3^{n}}$ We find the partial sums: $S_{1}= \frac{2}{3^1}=\frac{2}{3}$ $S_{2}= \frac{2}{3^1}+\frac{2}{3^{2}}=\frac{8}{9}$ $S_{3}= \frac{2}{3^1}+\frac{2}{3^{2}}+\frac{2}{3^{3}}=\frac{26}{27}$ $S_{4}= \frac{2}{3^1}+\frac{2}{3^{2}}+\frac{2}{3^{3}}+\frac{2}{3^{4}}=\frac{80}{81}$ We notice that each partial sum has a power of $3$ in the denominator and a numerator that is one less than the denominator. Therefore: $S_n=\frac{3^n-1}{3^n}$
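The closed form can be checked against the definition using exact rational arithmetic; a small sketch:

```python
from fractions import Fraction

def S(n):
    """n-th partial sum of a_k = 2 / 3^k, computed exactly."""
    return sum(Fraction(2, 3 ** k) for k in range(1, n + 1))

for n in range(1, 5):
    print(n, S(n), Fraction(3 ** n - 1, 3 ** n))  # the two columns agree
```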
## Precalculus (6th Edition) Blitzer The solutions of the equation are $x=-3,x=\frac{1}{2},x=1+\sqrt{3}$, and $x=1-\sqrt{3}$. To solve the polynomial equation $2{{x}^{4}}+{{x}^{3}}-17{{x}^{2}}-4x+6=0$, use the rational root theorem with ${{a}_{0}}=6\text{ and }{{\text{a}}_{n}}=2$. The factors of ${{a}_{0}}$ are 1, 2, 3, 6 and the factors of ${{a}_{n}}$ are 1, 2, so the possible rational roots are $\pm \frac{1,2,3,6}{1,2}$, that is, $\pm 1,\pm 2,\pm 3,\pm 6,\pm \frac{1}{2},\pm \frac{3}{2}$. Testing these candidates shows $x=-3$ is a root, so factor out $x+3$. Compute $\frac{2{{x}^{4}}+{{x}^{3}}-17{{x}^{2}}-4x+6}{x+3}$ to get the remaining factor $2{{x}^{3}}-5{{x}^{2}}-2x+2$, so $\left( x+3 \right)\left( 2{{x}^{3}}-5{{x}^{2}}-2x+2 \right)=0$. Testing the remaining candidates on the cubic shows $x=\frac{1}{2}$ is a root; factoring out $2x-1$ leaves $2{{x}^{2}}-4x-4=2\left( {{x}^{2}}-2x-2 \right)$, and the quadratic formula gives $x=1\pm \sqrt{3}$. The solutions of the equation are $x=-3,x=\frac{1}{2},x=1+\sqrt{3}$, and $x=1-\sqrt{3}$.
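A quick numerical check that the four values are indeed roots of the original quartic:

```python
import math

def p(x):
    """The quartic from the exercise."""
    return 2 * x ** 4 + x ** 3 - 17 * x ** 2 - 4 * x + 6

roots = [-3, 1 / 2, 1 + math.sqrt(3), 1 - math.sqrt(3)]
for r in roots:
    print(r, p(r))  # each value is zero up to floating-point rounding
```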
# Spectral Sequence Arrows Revisited I don’t want to do much today. In fact, we’ll probably not cover any new ground, but start to fill in what is going on a little better. Notice that in order to work with the spectral sequence last time, we didn’t actually have to figure out what the ${i, j,}$ or ${k}$ or ${d}$ maps were. We only needed that they existed. The main task today is to work out all these maps and sequences. Some are exact. Some are chain complexes. Some are commutative diagrams. Most of them fit together in a really organized way; other parts are just loosely related. It is useful to see how these fit together, since sometimes you can get by without the maps as long as you know it is part of a commutative exact diagram. Recall that our current situation is that given a filtered chain complex ${\cdots K^{p-1}\subset K^p\subset \cdots \subset K}$ we get an exact couple ${(D_{p,q}, E_{p,q})}$ where ${D_{p,q}= H_{p+q}(K^p)}$ and ${E_{p,q}=H_{p+q}(K^p/K^{p-1})}$ by taking the long exact sequence in homology associated to the short exact sequence ${0\rightarrow K^{p-1}\rightarrow K^p\rightarrow K^p/K^{p-1}\rightarrow 0}$. Two posts ago we saw exactly what the ${i_{p,q}}$, ${j_{p,q}}$ and ${k_{p,q}}$ maps were. Let’s develop them again, but being more careful as to how they all fit together. Well, ${i_{p,q}: H_{p+q}(K^p)\rightarrow H_{p+q}(K^{p+1})}$ comes from the map induced on homology by the inclusion ${K^p\hookrightarrow K^{p+1}}$. So for fixed ${q}$ we get infinite strings of the ${i_{p,q}}$ relating the groups as ${p}$ changes, i.e., we’ll get as part of a large diagram columns that look like ${\cdots \rightarrow H_{p+q}(K^{p-2})\rightarrow H_{p+q}(K^{p-1})\rightarrow H_{p+q}(K^p)\rightarrow H_{p+q}(K^{p+1})\rightarrow\cdots}$. The ${j_{p,q}: H_{p+q}(K^p)\rightarrow H_{p+q}(K^p/K^{p-1})}$ are induced on homology from the projection map ${K^p\rightarrow K^p/K^{p-1}}$.
The ${k_{p,q}: H_{p+q}(K^p/K^{p-1})\rightarrow H_{p+q-1}(K^{p-1})}$ comes from the snake lemma. It is the boundary map in the long exact sequence. Note that these last two will fit into rows: ${\cdots \rightarrow H_{p+q}(K^p)\stackrel{j}{\rightarrow} H_{p+q}(K^p/K^{p-1})\stackrel{k}{\rightarrow} H_{p+q-1}(K^{p-1})\stackrel{j}{\rightarrow} H_{p+q-1}(K^{p-1}/K^{p-2})\rightarrow\cdots}$. Thus we can form a large commutative diagram from these long exact sequences, and figure out how to extrapolate the exact couple from it. Below is the diagram relating what was just said: (Sorry about the image. I’ll figure this out eventually. For now you can click on it and zoom to get the full size). The labelled arrows are the ${i,j,k}$ maps of just a single exact couple. Note that there are infinitely many exact couples going on here, and they are all related. Note also that the exact couples don’t just go across or down. Now let’s figure out the differentials and how to get the spectral sequence. Well, take the ${H_{p+q}(K^p/K^{p-1})}$ spot. Recall that in the exact couple, this is the ${E}$ term, which is where our chain complex comes from. Then doing ${j\circ k}$ is the ${d}$ map. Note that this is really the ${d^1}$ map. So our page of ${E^1_{p,q}}$ comes from this row only. Take homology with respect to this, and due to the nature of the exact couples, we’ll get another page of these things. Now we are on the ${E^2_{p,q}}$ page of groups. The ${d^2}$ map now is to go right, then up, then right. This is the composition ${k\circ i^{-1} \circ j: H_{p+q}(K^p/K^{p-1})\rightarrow H_{p+q-1}(K^{p-2}/K^{p-3})}$. Take homology again. In general the ${d^r}$ map goes right, then up ${r-1}$ times, then right one more time. This gives us exactly what we said last time, ${d^r_{p,q}: E^r_{p,q}\rightarrow E^r_{p-r, q+r-1}}$. Hopefully this helps clarify some of the inner workings of what is going on here.
Next time we’ll talk about why the conditions from last time imply that the spectral sequence converges. We won’t prove it, but there are some nice concepts behind why it works that should be enlightening.
Pediatrics, October 2011, Volume 128, Issue 4 # School Absenteeism Among Children Living With Smokers Douglas E. Levy, PhD; Jonathan P. Winickoff, MD, MPH; Nancy A. Rigotti, MD Affiliations: Mongan Institute for Health Policy, Massachusetts General Hospital, Boston, Massachusetts; Tobacco Research and Treatment Center, Massachusetts General Hospital, Boston, Massachusetts; Departments of Medicine and Pediatrics, Harvard Medical School, Boston, Massachusetts; MGH Center for Child and Adolescent Health Policy, General Pediatrics Division, MassGeneral Hospital for Children, Boston, Massachusetts; and American Academy of Pediatrics, Julius B. Richmond Center, Elk Grove Village, Illinois ## Abstract OBJECTIVE: Involuntary tobacco smoke exposure causes substantial morbidity in children. We hypothesized that children exposed to tobacco smoke in the home would have increased school absenteeism with associated costs due to lost caregiver wages/time. METHODS: We analyzed data on health and absenteeism among schoolchildren aged 6 to 11 years identified in the 2005 National Health Interview Survey (NHIS). We used multivariate models to assess the relationships between adult-reported household smoking and child health and school absenteeism. Analyses were adjusted for children's and parents' demographic and socioeconomic characteristics. The value of lost caregiver time was estimated by using self-reported employment and earnings data in the NHIS and publicly available time-use data. RESULTS: Children living with 1 or ≥2 adults who smoked in the home had 1.06 (95% confidence interval [CI]: 0.54–1.55) and 1.54 (95% CI: 0.95–2.12) more days absent from school per year, respectively, than children living with 0 smokers in the home.
Living with ≥2 adults who smoked in the home was associated with increased reports of having ≥3 ear infections in the previous 12 months (adjusted odds ratio [aOR]: 2.65 [95% CI: 1.36–5.16]) and having a chest cold in the 2 weeks before interview (aOR: 1.77 [95% CI: 1.03–3.03]) but not with having vomiting/diarrhea in the previous 2 weeks (aOR: 0.93 [95% CI: 0.45–1.89]). Caregivers' time tending children absent from school was valued at $227 million per year. CONCLUSIONS: Tobacco smoke exposure has significant consequences for children and families above and beyond child morbidity, including academic disadvantage and financial burden. • secondhand smoke • school-aged children • caregivers • economic burden #### WHAT'S KNOWN ON THIS SUBJECT: Tobacco smoke exposure leads to respiratory illnesses in children. Geographically and demographically limited studies have suggested a link between living with a smoker and school absenteeism. #### WHAT THIS STUDY ADDS: In a nationally representative sample, we established that absenteeism among children aged 6 to 11 years living with smokers could be reduced 24% to 34% by eliminating smoking in their homes. Caregivers' lost wages/time due to child absenteeism was valued at $227 million per year. Involuntary tobacco smoke exposure (TSE), whether through secondhand smoke or third-hand smoke, is a common threat to child health. Thirty-four percent of children live with a smoker and at least 56% of children aged 3 to 11 years have detectable levels of serum cotinine, a preferred marker of TSE.1,2 There is no safe level of TSE.3 TSE has been linked to a range of adverse health outcomes in school-aged children, particularly respiratory conditions.
These include otitis media, bronchitis, bronchiolitis, wheeze, cough, asthma, and pneumonia.3–8 Long-term adverse outcomes include cognitive impairment, reduced lung function and development, and deficits in reading, math, and visuospatial reasoning.7,9–13 Often, the conditions caused by TSE result in the need for medical care.14–17 School absenteeism may be used as a general marker of morbidity that is easily assessed using survey methods.18 Geographically and demographically limited studies indicate that TSE leads to school absenteeism in young children, and there has been some investigation of specific mechanisms.19,20 Not only is school absenteeism a measure of health, it also has non-health effects. Children frequently absent from school because of asthma or other chronic illnesses have poorer school performance, as well as poorer social and intellectual growth.21–23 School absenteeism may also stress families emotionally and financially by inducing caregivers' workplace absenteeism. We analyzed federal survey data to provide the first national estimates assessing the effect of smoking in the home on school absenteeism, the mechanism through which TSE induces absenteeism, and the value of wages lost for caregivers who miss work to care for children home sick from school because of TSE. We hypothesized that children exposed to tobacco smoke in the home would have increased school absenteeism with associated costs because of lost caregiver wages/time. ## METHODS ### Data We examined data from the 2005 National Health Interview Survey (NHIS). The NHIS is an annual, nationally representative in-person survey. In each sampled household, additional sampling identifies 1 adult and 1 child to provide detailed health information. For children, a knowledgeable adult in the household, usually a parent, answered questions about the child. We restricted our analysis to children aged 6 to 11 years who were attending school.
We excluded children aged 12 years and older to reduce the likelihood that tobacco smoke exposure was due to the child's own smoking. Absenteeism was defined as the number of school days missed because of illness or injury during the 12 months preceding the interview. The NHIS does not contain biological data on TSE, such as serum cotinine. However, in a 2005 supplement to the NHIS, the survey asked the sample adult whether any residents of a household smoked inside the home, and if so, how many. Sample children living in a household with a sample adult who reported that residents smoked in the home were considered exposed to TSE. We assessed whether there was a dose response in our measure of TSE corresponding to the number of residents smoking in the home by defining exposure as 0 residents smoking in the home, 1 resident smoking in the home, or ≥2 residents smoking in the home. To help connect school absenteeism with smoking in the home and to assess the robustness of our exposure measure, we also looked at specific health outcomes. Respondents reported on the sample child's overall health (fair or poor health versus excellent, very good, or good health) and indicated whether the child had ≥3 ear infections in the previous 12 months, a cold in the past 2 weeks, a current diagnosis of asthma, and whether the child had vomiting/diarrhea in the past 2 weeks. For children with asthma, respondents indicated the number of asthma attacks in the past year. 
On the basis of existing epidemiology studies, we hypothesized that the respiratory conditions (asthma, chest colds, ear infections) would be related to home smoke exposure, whereas vomiting/diarrhea would not.3 To further assist in establishing the relationship between TSE and school absenteeism, we tested whether the relationship between the presence of smokers in the home and school absenteeism was mediated by the abovementioned illnesses according to the Baron and Kenny framework.24 We controlled for illnesses that were significantly related to household smoking in models assessing the relationship between household smoking and absenteeism. Reductions in the magnitude of the relationship between household smoking and absenteeism when the illnesses were added to the regression models were taken as evidence that the illnesses mediated the relationship between household smoking and absenteeism. Our analyses accounted for child, family, and geographic characteristics that might potentially confound the relationship between household smoking and absenteeism. We controlled for child's age, gender, race, and Hispanic ethnicity; census region; family poverty and parent education; and number of children in the home, family structure (single mother, other single parent, or other family structure), and number of unemployed adults in the home. We posit that family structure and the number of unemployed adults in the home reflect the availability of potential caregivers when a child is sick. Pearson χ2 statistics were used to compare the characteristics of children who lived in homes with and without smoking and to make unadjusted comparisons of health states across household smoking values. We estimated multivariate regression models to assess the relationship between our outcomes and household smoking, controlling for child, family, and geographic characteristics. For our health outcomes, we estimated adjusted odds ratios (aORs) using logistic regression models. 
For number of school days missed, we estimated generalized linear models with a log link and the Poisson variance function. Confidence intervals (CIs) for regression coefficients were based on Wald tests. The mean number and the percentage of school days missed because of household smoking were calculated among children living in smoking households using predicted values from the estimated generalized linear model regressions. CIs for these statistics were based on the bootstrap method.25 All analyses were performed using Stata 10.1 (Stata Corp, College Station, TX), accounting for the complex design of the survey. Children's school absenteeism also has an economic cost because of caregivers' taking time off from work or other tasks to care for their children. Using established cost-of-illness methods,26,,29 we estimated the value of caregiver time by multiplying the predicted number of school days missed for each child in the sample by the value of a day of that child's caregiver's time. For caregivers who were employed, we valued time using his or her daily earnings as self-reported or imputed in the NHIS.30 If a caregiver was unemployed, we assigned a value for his or her time based on lost household production (ie, cooking, cleaning, household management) as valued in the American Time Use Survey and the Occupational Employment Statistics program, and synthesized in the 2005 edition of The Dollar Value of a Day: 2005 Dollar Valuation.31 Household production was valued according to what it would cost to hire someone else to complete the foregone household tasks. If time to care for a sick child came from a caregiver's leisure time rather than his or her household production time, we will have underestimated the economic cost of caring for the sick child because leisure time is valued slightly higher than household production time. 
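The count-data model described above (a generalized linear model with a log link and the Poisson variance function) can be illustrated with a self-contained sketch fit by iteratively reweighted least squares. The data and coefficients below are invented for illustration; they are not the NHIS values:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=30):
    """Log-link GLM with the Poisson variance function, fit by
    iteratively reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu   # working response for the log link
        XtW = X.T * mu            # Poisson variance gives weights W = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Invented example: mean days absent grows with the number of home smokers
smokers = np.repeat([0.0, 1.0, 2.0], 40)
X = np.column_stack([np.ones_like(smokers), smokers])
days = np.exp(1.0 + 0.15 * smokers)   # noiseless, so the fit is exact
beta = poisson_glm_irls(X, days)      # recovers [1.0, 0.15]
```

In practice one would use a statistical package (the paper used Stata) and survey-weighted standard errors rather than this bare fit.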
The caregiver was defined as the mother or female guardian, if one was present, or the father/male guardian if there was no mother/female guardian in the home. This use yields a conservative estimate because women on average earn less than men,32 and if a man was the caretaker for the child, the cost would be higher. Our estimates will also be somewhat conservative because we do not account for days of work missed to care for sick children during school vacations. ## RESULTS More than 14% of children in our sample, representing 2.6 million children in the United States, lived in a household in which at least 1 resident smoked inside the home; 8% had 1 household member who smoked in the home, and 6% had ≥2 household members who smoked in the home. Demographic distinctions between households with and without inside smoking were similar to those found when comparing smokers with nonsmokers (Table 1). Households with no indoor smoking tended to be more educated, have a higher income, and more likely to be Hispanic (all P < .001). They were also less likely to have been in the south and more likely to have been in the west (P < .001). Compared with households with 1 person smoking indoors, those with ≥2 people smoking indoors had higher incomes and were more likely to be white (P < .001 and P = .02, respectively, P values not shown in Table 1). TABLE 1 Characteristics of the Study Population Living with a smoker was associated with both of our measures of school absenteeism (Table 2). The likelihood of missing any school was higher for those living in homes in which there was 1 person who smoked in the home (aOR: 1.68 [95% CI: 1.20–2.34]) than in homes where no one smoked indoors. The number of days a child was absent from school was significantly higher for those living in homes in which smoking took place than for those living in smoke-free homes, and greater numbers of household smokers led to increased absenteeism. 
Children living with exactly 1 person smoking in the home missed 1.06 (95% CI: 0.54–1.55) additional school days per year, and those living with ≥2 smokers missed 1.54 (95% CI: 0.95–2.12) more days of school per year than they would have if they lived in smoke-free homes. Among children living with exactly 1 or with at least 2 smokers, 24% (95% CI: 14–32) and 34% (95% CI: 24–43), respectively, of school days missed were attributable to residents' smoking. TABLE 2 Adjusted Relationships Between Household Smoking and Absenteeism In Table 3, we provide evidence that increased absenteeism among children living in homes in which smoking takes place is due in part to TSE-induced illnesses assessed in the NHIS. Living with a smoker was associated with both of our measures of respiratory infection, and there was modest evidence of a dose-response or threshold effect. The likelihood that a child had ≥3 ear infections in the previous 12 months increased with the number of residents smoking in the household, and was significantly higher among children with at least 2 people who smoked in the home (aOR: 2.65 [95% CI: 1.36–5.16]). Reports of a chest infection in the 2 weeks before the interview were similar for children with 0 or 1 residents smoking in the home but were significantly elevated among children living with at least 2 people who smoked in the home (aOR: 1.77 [95% CI: 1.03–3.03]). An apparent relationship between living with 1 person smoking in the home and fair or poor self-reported health status was not statistically significant at the P = .05 level (aOR: 2.11 [95% CI: 0.93–4.79]). We were unable to detect any relationship between household smoking and prevalent asthma or asthma attacks among children with asthma. As hypothesized, we found no association between household smoking and whether the child had an episode of vomiting/diarrhea in the 2 weeks before the interview.
TABLE 3 Relationship Between Household Smoking and Child Health Evidence that the effect of household smoking on school absenteeism was partially mediated by respiratory tract infections is presented in Table 4. For models of any days of school missed or number of days of school missed as a function of household smoking, we added control variables for respiratory tract infections (≥3 ear infections in the past 12 months, a chest cold in the past 2 weeks, each of which had significant relationships with household smoking) to the models estimated in Table 2. These conditions were chosen on the basis of the significant relationships identified in Table 3. For the regression modeling of the relationship between any days of school missed and household smoking, the coefficient on having 1 household smoker increased 1.1%, but the coefficient on having at least 2 smokers in the home decreased by 23.2% when respiratory tract infections were included in the model. Similarly, for the regression modeling the number of days a child was absent from school as a function of household smoking, the coefficient on having 1 household smoker increased 1.9% when respiratory tract infections were included in the model, but the coefficient on ≥2 household smokers decreased by 16.7%. TABLE 4 Changes in Exposure Coefficients With Addition of Respiratory Illness to Main Outcome Models For the economic analysis, we estimated that 69% of the caregivers were employed. The mean annual earnings of the employed caregivers was $20 087. In a year with 250 working days, this represents ∼$80 per day. Assuming caregivers missed work each time a child stayed home from school because of TSE, we estimate the value of this lost work time was $176 million in 2005. The mean value of household production for unemployed caregivers was$51 per day.31 The value of household production that would be missed to care for children sick because of TSE is estimated at $51 million for 2005. 
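The dollar figures above follow from simple arithmetic on the reported inputs; a sketch using only the numbers quoted in the passage:

```python
# Figures quoted in the passage (2005 dollars)
mean_annual_earnings = 20_087    # employed caregivers' mean annual earnings
working_days = 250
daily_wage = mean_annual_earnings / working_days
print(round(daily_wage, 2))      # roughly the $80/day figure in the text

lost_wages = 176                 # $ millions, employed caregivers
lost_household_production = 51   # $ millions, unemployed caregivers
print(lost_wages + lost_household_production)  # 227, the abstract's total
```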
## DISCUSSION Using national data, we present estimates of the relationship between household smoking and school absenteeism among children. Household smoking was associated with increased absenteeism overall, and as the number of residents smoking in the house increased, so did the number of school days missed by the children. We estimate that one-quarter to one-third of school days missed among children living with smokers are due to residents' smoking. We established a relationship between household smoking and 2 respiratory illnesses known to be associated with TSE, and we identified modest evidence that these outcomes increased as the number of residents smoking in the home increased. At the same time, we found no relationship between household smoking and health problems unrelated to TSE. The relationship between household smoking and absenteeism diminished when ear infections and chest colds were added to the model, suggesting the link between TSE and absenteeism is in part mediated by these 2 illnesses when there are 2 residents smoking in the home. Our results largely confirm the findings of earlier regional studies. One, focused on children in southern California in 1996,20 tracked school absences among fourth-graders to determine if they were due to respiratory illness, gastrointestinal illness, or other causes; household smoking status was assessed by questionnaire in the same way it was assessed on the NHIS. The authors found that children living in households with smokers were at greater risk for absences due to respiratory illness but found no association between TSE and nonillness absences. 
A second study, conducted among mostly Hispanic preschool through fifth grade students in Passaic, New Jersey, from 1997 to 2001, tracked the relationship between household asthma triggers and both asthma and absenteeism.19 Using TSE and outcome measures similar to ours, the authors found the relative risk of absenteeism was higher for children with TSE. However, aside from race/ethnicity, there were no sociodemographic controls used in this study. Our study extends these studies by using a national sample, a wider age range of children than the California study, and a more thorough set of sociodemographic control variables than the New Jersey study. Furthermore, we report differences in the number of absences due to illness for exposed and unexposed children, establishing the magnitude of the problem, whereas the other studies reported only relative risks. In our study, we did not find a significant association between household smoking and asthma prevalence or attacks. The California study did find an association between household smoking and lower respiratory tract illness with wheeze, but did not report whether the relationship was statistically significant. The New Jersey study found a significant relationship between household smoking and asthma, but it focused on a high-risk asthma population. Our finding contrasts with the Surgeon General's 2006 meta-analysis linking parent smoking to asthma prevalence.3 However, most individual studies included in the Surgeon General's analysis did not find statistically significant relationships between parent smoking and asthma prevalence; it was only in the pooled analysis that a significant finding emerged. 
Research on children with asthma has found that children with asthma-related morbidity have increased school absenteeism,29,33–36 although there is mixed evidence on whether school outcomes are adversely affected.33–35 Nevertheless, additional research is necessary to determine the extent to which the relationship between TSE and school absenteeism is mediated by asthma. We did confirm predicted relationships between smoking in the home and respiratory tract infections as measured by frequent ear infections and recent chest colds. These measures only partially mediated the relationship between TSE and absenteeism. Absenteeism is thus a useful, if imperfect, proxy for a broader range of specific health conditions, and it provides a highly tangible measure of TSE-induced functional limitation.18 Although it is clear that the prevention of illness itself is reason enough to push for further expansion of home-smoking bans, establishing the effects of TSE on school absenteeism also highlights other preventable consequences of the smoking epidemic. There is some evidence that chronic absenteeism due to illness is associated with poorer school achievement,34 although additional research is necessary to determine the extent to which the numbers of excess absences observed in the present study will lead to poor educational outcomes. Our finding that 24% to 34% of school absenteeism due to illness among children living in homes in which residents smoke is associated with TSE suggests that reductions in household smoking and overall smoking rates will greatly reduce the illness-related attendance problems of these exposed children. Beyond its impact on individual children, absenteeism has consequences for families and society.29,36 When young children are home from school, parents may miss time at work or have to find alternative sources of child care. 
Such a burden will be especially acute for low-income parents (nearly half of the smoking households in our population had family incomes ≤200% of the federal poverty level) and single parents (22% of the smoking households in our population were headed by single parents). Parents working low-paying jobs at small businesses may even be vulnerable to job loss. We conservatively estimated that $227 million worth of work/household production time may have been missed in 2005 to care for TSE-induced school absenteeism. In the event that parental circumstances prevent a sick child from staying home, illnesses may unnecessarily spread to the index child's classmates as well. Our study is subject to several limitations. As is to be expected in a large national survey that was not designed with TSE in mind, our measure of TSE is imprecise. We do not have direct data on children's personal TSE as measured by cotinine levels, for example, and we could not assess exposure that may have taken place outside the home. Nevertheless, as the reported number of people who smoked in the home increased, so did the number of school days missed, even after accounting for factors affecting the availability of caregivers. Together with the mediation analysis and the absence of a relationship between TSE and nonrespiratory illnesses, this suggests that family members' self-reports of smoking in the home are a reasonable proxy for TSE. As with any observational study, there may be confounding factors that were not measured and not included in the analysis. ## CONCLUSIONS In this first national study, household smoking is significantly associated with school absenteeism among children, a broad measure of children's health status and a direct assessment of functional limitation. These absences may result in costly missed work/household time for parents in families, many of which are low income and already financially burdened by the daily cost of cigarettes. 
Overall, these results illustrate the extent of tobacco's impact on child and family well-being, highlighting academic disadvantage and financial burden in families in which parents smoke. ## ACKNOWLEDGMENTS Dr Levy was funded by the Flight Attendant Medical Research Institute. Dr Winickoff was supported by the Julius B. Richmond Center of Excellence of the American Academy of Pediatrics through a grant from the Flight Attendant Medical Research Institute. Dr Rigotti was supported by National Heart Lung and Blood Institute grant K24-HL0440. ## Footnotes • Accepted June 1, 2011. • Address correspondence to Douglas E. Levy, PhD, Mongan Institute for Health Policy, Massachusetts General Hospital, 50 Staniford St, 9th Floor, Boston, MA 02114. E-mail: dlevy3{at}partners.org • Dr Levy conceived of and designed the study, acquired, analyzed, and interpreted the data, drafted the article and critically revised it, and takes full responsibility for the final submitted manuscript; Drs Winickoff and Rigotti assisted in interpreting the data and revising the manuscript. Each author accepts full responsibility for the final submitted manuscript. • FINANCIAL DISCLOSURE: The authors have indicated they have no financial relationships relevant to this article to disclose. • Funded by the National Institutes of Health (NIH). • TSE tobacco smoke exposure • NHIS National Health Interview Survey • aOR adjusted odds ratio • CI confidence interval
Today, I watched this video on YouTube and was amazed at how easy it is to decode POCSAG pagers. All you need is GQRX and Multimon. The only thing the video misses is a small shell script like this:

```shell
nc -l -u -p 7355 | \
  sox -t raw -esigned-integer -b 16 -r 48000 - -esigned-integer -b 16 -r 22050 -t raw - | \
  multimon-ng -t raw -a POCSAG512 -a POCSAG1200 -a POCSAG2400 -f alpha -
```

Basically, you use netcat to open a UDP server (-l = listen) and start streaming samples from GQRX by pressing this button. You can configure the port; the one in the example is the default port of GQRX. As shown in the video, you have to use the "Narrow FM" demodulator. A bandwidth of around 40k works pretty well for me. Here in Paderborn, Germany, POCSAG seems to be used mainly by the ambulance service. I tested it with my Ettus B210 and my RTL-SDR dongle. Both worked fine. Have fun!
# kilometre per second to inches per minute conversion

The conversion number between kilometre per second [km/s] and inches per minute [ipm] is 2362204.7244095. This means that kilometre per second is a bigger unit than inches per minute.

Switch to reverse conversion: from inches per minute to kilometre per second.

### Calculation process of conversion value

• 1 kilometre per second = (exactly) (1000) / ((0.0254/60)) = 2362204.7244095 inches per minute
• 1 inch per minute = (exactly) ((0.0254/60)) / (1000) = 4.2333333333333 × 10⁻⁷ kilometre per second
• ? kilometre per second × (1000 ("m/s"/"kilometre per second")) / ((0.0254/60) ("m/s"/"inches per minute")) = ? inches per minute

### kilometre per second to inches per minute conversion chart

| kilometre per second | inches per minute |
| --- | --- |
| 0 | 0 |
| 10 | 23622047.244095 |
| 20 | 47244094.488189 |
| 30 | 70866141.732284 |
| 40 | 94488188.976378 |
| 50 | 118110236.22047 |
| 60 | 141732283.46457 |
| 70 | 165354330.70866 |
| 80 | 188976377.95276 |
| 90 | 212598425.19685 |
| 100 | 236220472.44095 |
| 110 | 259842519.68504 |
## Details about kilometre per second and inches per minute units

### kilometre per second

Definition of kilometre per second unit: ≡ 1 km / 1 s. The speed with which a body moves 1 km in 1 second. The Earth's orbital speed is about 30 km/s.

### inches per minute

Definition of inches per minute unit: ≡ 1 in / 60 s = 2.54 cm / 60 s. The speed with which a body moves 1 inch (or 2.54 centimetres) in 1 minute.
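The conversion number above comes straight from the exact unit definitions (1 km/s = 1000 m/s, 1 in/min = 0.0254 m / 60 s); a small sketch of the calculation (the function name is our own):

```python
# Convert km/s to inches per minute using the exact definitions of both units.
def kmps_to_ipm(v_kmps: float) -> float:
    metres_per_second = v_kmps * 1000          # 1 km/s = 1000 m/s
    return metres_per_second / (0.0254 / 60)   # 1 in/min = 0.0254 m / 60 s

print(kmps_to_ipm(1))    # ≈ 2362204.7244095, the conversion number above
print(kmps_to_ipm(10))   # ≈ 23622047.244095, matching the chart row for 10
```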
# Sensing water level electronically I am working on an electronic water sensing mechanism. I need to sense various levels of water as the water level is consumed and refilled in an underground water tank. Later, I want to control the sump motor with logic when various conditions are met. Focusing on just the electronic sensing mechanism, there are many choices available: • bias a transistor with the help of water as a conductor • use logic gate inputs that become HIGH/LOW when water touches probes. There are several choices for gates; in general any gate can be used, even a NOT gate will do. • ultrasonic depth sensing • LDR based • various others I am working with logic gates at the moment but will transfer it to a microcontroller later when I am satisfied with 'water sensing.' The problem with many of the techniques above is that their reliability changes with many environmental effects. Water vapour, hydration, corrosion, electrolysis, temperature and many similar factors change parameters which can affect sensing. Currently I am working with a CD4093 NAND. It has a built-in Schmitt trigger so it can shape inputs. I prefer to give oscillating inputs to this logic gate, so at least electrolysis can be prevented. Probes will usually be dipped in water. To perform oscillation in the sensing probe I can use transistors (AC biased), a 555, or an opamp. 1. Which one would be the better option for oscillating in this scenario? 2. How can I simulate the properties of tap water in the reservoir, e.g. water conductance and other electrical properties, in NI Multisim? Currently I ground the inputs with a switch to give the effect of water touching the probes in NI Multisim. How can I model the real water properties affecting my probes? 3. Any other idea to achieve the task? I want 8 points of level detection, each point one foot higher than the previous one. From full to empty, the difference is approximately 10 feet. This is an underground tank of concrete. 
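For question 2, one way to approximate tap water in a simulator (instead of just grounding the input with a switch) is to model the water path between the probes as a plain resistance forming a divider with a pull-up on the gate input. The sketch below does that arithmetic; the water resistances and the CD4093 threshold voltages are illustrative assumptions (check your own measurements and the datasheet), not authoritative values:

```python
# Model the probe pair as a resistance to ground, in a divider with a pull-up.
VCC = 5.0                    # supply voltage (assumed)
R_PULLUP = 100e3             # pull-up from gate input to Vcc (assumed)
VT_POS, VT_NEG = 2.9, 1.9    # assumed CD4093 Schmitt thresholds at 5 V

def gate_input_voltage(r_water_ohms: float) -> float:
    """Voltage at the gate input for a given probe-to-ground resistance."""
    return VCC * r_water_ohms / (R_PULLUP + r_water_ohms)

# Probes submerged: tap water might present tens of kilohms between probes.
print(gate_input_voltage(20e3) < VT_NEG)   # True -> input reads as logic low
# Probes dry: essentially open circuit, so the pull-up wins.
print(gate_input_voltage(1e12) > VT_POS)   # True -> input reads as logic high
```

In Multisim this suggests replacing the switch with a resistor (or a potentiometer swept over the expected range) between the probe node and ground.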
I want it reliable and long lasting, free from errors and safe from environmental changes. A one-time-install solution: no corrosion at probes, no electrolysis, no reading errors due to temp changes, hydration, vaporization, etc. The tank bottom is approximately 10 feet from the top. Level readings at the moment will only light up LEDs, but in the next phase they will be the inputs to a microcontroller-based logic algorithm implementation where two pumps will be in service in parallel but one at a time, taking turns. Several conditions will be checked to turn on the right pump, or to just give warning beeps, and more. This all will come in the next phase, later. • Updated - - - I understand all three oscillators I mentioned: AC-biased transistors, a 555 clock generator, an op-amp generating square waves. I want an oscillator which will oscillate in +ve and -ve polarities; this scheme can avoid electrolysis. I think an op-amp will do that better. How can I make the reading from the probe non-oscillating so that it can be fed to a logic gate? Otherwise the logic gate output will also oscillate. Should I use de-bouncing before giving the input to logic gates? Are there any stray capacitive/inductive or other parameters in tap water, other than the conductance (or resistance), that should be taken care of when considering digital inputs? I am trying to avoid any commercial sensors. I prefer building my own. ## Too Complicated/Expensive/Unreliable The approaches you suggest require expensive and power-consuming hardware with complex signal analysis to extract water-level information. All you need is a few $4 float switches arranged at different depths like this: I had suggested this approach to another similar question. Now, life is really simple. Each of the switches corresponds to a different water level (you might want eight of them 1 ft apart?). As the water level changes, more (or fewer) of the switches will be closed. 
This can be read with digital logic or a microcontroller extremely easily. If you want to spend more money to improve reliability: • You could use two (or more) independent arrays in parallel and compare the results to detect sensor failures. • Use more expensive (higher quality) float switches • Increase the number of float switches • +1. Your upgrades bucket list could do with another option: A graduate intern with a scale, using a dip-stick and toggling the pumps. j/k :-) Jun 16, 2013 at 4:53 • I am using something similar at the moment as a temporary solution. The dipped object actuates/de-actuates micro switches mounted on the top cover of the tank with the rise and fall in water level. This is a mechanical arrangement. I am thinking of something without moving parts. Jun 20, 2013 at 8:09 • Sure... but why? Jun 20, 2013 at 9:18 Just put a pressure sensor at the bottom of the tank, and measure the water pressure with a suitable MCU with an ADC. The water pressure is proportional to the depth, of course. Q.3 any other idea to achieve the task? Well, you are trying to avoid commercial sensors, but I am doing something similar using a Freescale MPX5050 pressure sensor, and its output is read via a micro's analog port. Like you, I wanted something that was installed permanently and very reliable. My setup is a copper pipe into the tank; this pipe is pressurized with just enough airflow to generate bubbles coming out of the pipe end, which is at the bottom of the tank. The air pressure in the pipe is directly proportional to the water height. The drawback of this system is that you need a small air compressor + pressure regulator to generate air pressure. This costs money, but for me it was worth it. The advantage is no parts in the tank to corrode and maintain. I have two tanks, one potable water and one grey water. I want it reliable and long lasting, free from errors and safe from environmental changes. 
a one-time install solution, no corrosion at probes, no electrolysis, no reading errors due to temp changes, hydration, vaporization, ... Let's see: Water + metal + electric potential. Tough call. The only solution I can think about is capacitive sensors where the metal plates are covered in plastic. Cheap? No. For a "set and forget" level of reliability, I would suggest using magnetic float switches. They are fairly cheap and have no moving parts that need a water-tight seal. Use a stainless steel rod or any other non-corrosive material to hang 8 of these in a row so that they can trigger at the different water levels. You can get right-angle switches, too, so it is easier to mount them on a vertical stick. Each of these switches needs to be connected to a microcontroller pin and ground. The terminal connected to the microcontroller pin must be connected to a pull-up resistor (1K ohms would suffice) to the microcontroller's Vcc (5V for an Arduino Uno or 3.3V for a NodeMcu board). A small capacitor (100nF) parallel to each switch's terminals would be helpful to reduce switch bouncing noise. Assuming the switch is open when there is water at its level, then the input will be high and the level calculation can be as simple as (assuming Arduino-like code): int waterLevel = digitalRead(sw0) + digitalRead(sw1) + ... + digitalRead(sw7); If the switch is closed when there is water at its level, then change that to: int waterLevel = !digitalRead(sw0) + !digitalRead(sw1) + ... + !digitalRead(sw7); 'waterLevel' will then vary between 0 and 8.
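The two pressure-based answers above both rest on the hydrostatic relation p = ρ·g·h. A quick check of the numbers for the roughly 10 ft tank in the question (standard values for water density and g; the 0–50 kPa span is the MPX5050's nominal range):

```python
# Depth <-> gauge pressure for a column of water: p = rho * g * h.
RHO_WATER = 1000.0    # kg/m^3, fresh water
G = 9.81              # m/s^2

def depth_from_pressure(p_gauge_pa: float) -> float:
    """Water depth in metres above the sensor, from gauge pressure in pascals."""
    return p_gauge_pa / (RHO_WATER * G)

full_tank_m = 10 * 0.3048                      # 10 ft in metres
p_full = RHO_WATER * G * full_tank_m
print(round(p_full))                           # ~29901 Pa, well inside 0-50 kPa
print(round(depth_from_pressure(p_full), 3))   # 3.048 m, i.e. the full 10 ft
```

Dividing the full-scale pressure into 8 equal steps then gives one reading per foot of water, matching the 8-point requirement.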
# Configuring your model for CDM¶ After adding the CDM library to your model, you need to configure how, and which, data in your model needs to be managed through CDM. The main mechanism for setting this up is CDM data categories. In this section we’ll discuss how you should view CDM categories, and what you should keep in mind when setting up categories to prevent runtime problems. ## CDM categories¶ CDM categories are collections of identifiers in your model whose data changes, from a functional point of view, need to be stored in the CDM database in a single transaction. CDM categories are not restricted to identifiers that can be contained within a single database table: they can contain sets, parameters and variables of differing dimensions, and with differing domains. Categories can contain only a few identifiers, or a large number of identifiers, as long as their functional role in the model is sufficiently similar to warrant containing them in a single category. ### Reasons to create categories¶ The following sections describe a number of functional reasons that may cause you to create separate categories. #### Independent scenarios¶ The main reason to separate identifiers in your model into different categories should be whether the state of the data of the identifiers in each category needs to be able to change independently of the data of identifiers in other categories. This allows you to have multiple users create different scenarios for single categories, and later on mix and match such scenarios for different categories to create multiple combined data sets to work with. If you do not see a need to mix and match different scenarios for data of different categories, we advise keeping the number of categories as small as possible, as having to manage a large number of categories inherently adds to the complexity of your application. 
Categories are completely unnecessary as a tool to create groups of identifiers to which a user is expected to make simultaneous changes. CDM stores change sets of individual changes in the first place, and does not care whether such individual changes are stored in a single category or in multiple categories. Adding multiple categories does, however, add conceptual complexity to your model, and will actually result in CDM having to perform more actions to keep all categories in sync. #### Differing auto-commit requirements¶ Another reason for creating multiple categories could be when your application has different auto-commit requirements for different classes of identifiers during different phases of your app, as auto-commit behavior in CDM is arranged on a per-category level. If during one phase of your model you want your end-user to manually commit a complete set of changes, while at other locations in your app you want changes to directly propagate to all other end-users of your application, then this may be a reason to separate the manually committed and auto-committed identifiers into two distinct categories. Auto-commit behavior is typically observed for AIMMS applications that have an operational nature, and where changes by one user directly influence the decision space of other users. #### Defined root sets¶ There is, however, one other major reason that actually necessitates the use of multiple categories, which is related to the internal workings of CDM. Before being able to pass any multi-dimensional data, CDM needs to pass all root set elements for all root sets used in the identifiers of a category first, and subsequently passes all other data in a two-staged approach, allowing it to bypass any subset domains and domain conditions in the first stage, after which it is able to assign the values to the real identifiers in the correct order in the second stage. 
This approach poses a problem when root sets are defined in terms of other parameters in the model. A prime example is a calendar, which typically depends on two string parameters setting the begin and end date. When these string parameters have no value, the calendar will be empty, and CDM will not be able to set up the mapping between the numbering used by the CDM database and the internal element numbering of AIMMS, causing all further data transfer of calendar-related data to fail. The same holds for other root sets that are defined in terms of other numerical, string or element parameters. The solution is to store the parametric data used in the definition of such root sets, along with any other data on which such parameters depend, in a separate category. When determining the order of reading data, the CDM library will then make sure that this category is read before any categories that depend on these parameters to populate root sets. ## Specifying categories¶ The CDM library uses model annotations to define CDM categories. Through the annotation cdm::Category you can assign the name of a category in which a single identifier, or all identifiers underneath a section in your model, should be placed. For instance, in the image below, all identifiers stored underneath the declaration section ConfigData will be placed in the CDM category of the same name. Identifiers underneath a section inherit its annotation values, and thus will end up in the same CDM category. If you want to exclude such an identifier from the category, you can do so by overriding the cdm::Category annotation. Although the cdm::Category annotation can be assigned to any type of identifier, the CDM library will only take sets (including calendars), parameters and variables into account, and will exclude all defined sets and parameters, as these can be recomputed, and thus do not need to be stored through CDM. 
## Taking care of dependencies¶ When assigning identifiers in your model to a CDM category, you need to pay special attention to other identifiers on which CDM-managed identifiers depend. Whenever you pull the data of CDM-managed identifiers from the versioned CDM database, assigning that data to the identifiers in the model will most likely fail if the dependent identifiers don’t hold compatible values that allow the CDM-managed values to be assigned. This means you have to make sure that for all CDM-managed identifiers • all of their domain sets, element ranges, and super-sets hold the set elements for which the versioned CDM database holds values, and • any identifiers used in numerical ranges, calendar begin- and end-dates, or definitions of other dependent identifiers hold values that allow the values of CDM-managed identifiers to be assigned from the versioned CDM database. ### Special caution with respect to root sets¶ CDM will automatically ensure that all root sets and their elements are represented in the CDM database, as these are necessary for translating label numbers as stored in the CDM database to element numbers in each AIMMS session. But not adding root sets explicitly to any CDM category will lead to behavior where such root sets only grow over time. Deletion of root set elements in the AIMMS model will only be picked up by CDM if the root set is explicitly a member of a CDM category. ### Ensuring full data compatibility¶ Full data compatibility is ensured in the following cases: • The dependent identifiers themselves are managed through CDM, i.e., have been assigned to a CDM category themselves as well. • They are defined in terms of constants or other identifiers that are also managed by CDM. If you fail to meet these conditions, you may notice that checking out a snapshot from the CDM service may result in errors. 
### Restrictions on identifier types¶ AIMMS CDM allows the following identifier types to be included in a category: • simple root sets and subsets, integer sets, indexed sets and relations • scalar and multi-dimensional numerical, element and string parameters • scalar and multi-dimensional numerical and element variables AIMMS CDM does not support compound sets, or data defined over compound sets. ### Updating category contents¶ During the lifetime of your application, it is very likely that the contents of the CDM categories you have specified will change. Such changes can consist of new identifiers that you have added to a category, of identifiers that have been deleted from a category, or of structural changes to existing identifiers. If you have already initialized an application database for a particular data category, the application database might have to be adapted the next time you connect to it. You can indicate to the CDM library that you made changes to the existing category setup by modifying the value of the string parameter cdm::DataSchemaVersion, which has an initial value of 1. The value of the data schema version is also stored in the application database, and each time the CDM library connects to it, it will check the version in the model against the version stored in the application database. If the value of cdm::DataSchemaVersion has changed, the CDM category will be checked for changes, and tables will be re-initialized where necessary. 
• If a new identifier has been added to the category, a new corresponding table will be added to the application database • If an identifier has been deleted from a category, the existing table will be detached from the category, but the corresponding table in the application database will not be deleted (as it still contains history) • If an identifier has been structurally changed, a new table will be created in the application database, but the old table, and all of its contents, will not be deleted, as it contains the history of the identifier prior to the structural change. Note, however, that, currently, CDM does not know how to fill the new table based on the contents of the previous table. ### Dealing with name changes¶ When you simply change the name of identifiers in your model, but do not make a structural change to the identifier, you may end up with a situation where you already have a table in the CDM database still holding valid data, but which corresponds to the old identifier name. You can solve this situation by specifying the old identifier name for the cdm::AlternateName attribute of the identifier. This will cause the CDM service to first check whether a table for the alternate name already exists in the database with the correct dimensional structure, before actually creating a new table for the current identifier name. If the structure of the existing table differs from the structure of the actual identifier, the CDM service will then create a new table in the CDM database corresponding to the current name of the identifier. ## Initializing CDM support in your model¶ After you have identified the functional categories that you want CDM to work with, and assigned all identifiers you want to store in each category, the main procedure for actually creating and activating the categories is cdm::ConnectToApplicationDB. 
Based upon the categories you defined, the CDM library will determine the actual contents of these categories, and the order in which the categories themselves and all identifiers in each category need to be read, based on their interdependencies. This order is determined within the procedure cdm::ProcessAnnotations, and you can debug the process using the AIMMS debugger, if the need arises. You can inspect the final resulting order through the parameters cdm::CategoryOrder and cdm::IdentifierOrder, where identifiers with a higher (absolute) order value depend on identifiers with a lower (absolute) order value, and the order value of defined identifiers is negated. Subsequently, the call to cdm::ConnectToApplicationDB will create the CDMRuntime runtime library, which will hold a number of shadow identifiers for each identifier managed by CDM. These shadow identifiers are used by the CDM library to track the state of, and individual changes to, the CDM-managed identifiers in your model during various stages of its operation. ### CDM backend specification¶ After the CDM support in your model has been set up, cdm::ConnectToApplicationDB will try to connect to an existing application database. How the CDM library will connect to the CDM service is determined by various configuration parameters in the CDM library. • By setting cdm::UseEmbeddedServer to 1, the CDM library will start the embedded CDM service, taking its CDMConfig.xml configuration file from the folder specified by cdm::EmbeddedServerConfigFolder. The latter defaults to the Config subfolder of the main project folder. You can copy the CDMConfig.xml file from the Config subfolder of the AimmsCDM library there to get started, and adapt it to your needs. 
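Conceptually, the read order derived from the interdependencies is a topological ordering of the dependency graph between categories (and identifiers within them). A minimal Python sketch of the principle, with invented category names; this illustrates the idea only and is not the CDM implementation:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical categories: ConfigData holds the begin/end dates that define a
# calendar, so every category with calendar-indexed data depends on it.
dependencies = {
    "ConfigData": set(),                        # no dependencies: read first
    "MasterData": {"ConfigData"},               # calendar-indexed sets/parameters
    "ScenarioData": {"ConfigData", "MasterData"},
}

read_order = list(TopologicalSorter(dependencies).static_order())
print(read_order)  # ['ConfigData', 'MasterData', 'ScenarioData']
```

A category defining root-set data always sorts before the categories that need those elements, which is exactly the guarantee described for defined root sets above.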
• If the application using the CDM library is deployed from within the AIMMS Cloud Platform, and cdm::CloudServiceName is set, the CDM library will connect to an on-demand CDM service with the given name, or start such a service if it has not been started yet. The service will connect to the MySQL database specified by
  • cdm::DatabaseHost
  • cdm::DatabaseUser
  • cdm::DatabasePassword
  Typically, these point to the hostname and credentials of the MySQL application database that you can order with the AIMMS Cloud Platform.
• If the application is deployed from an on-premise PRO server and cdm::TunnelContext is set, the CDM library will set up a PRO tunnel to the given tunnel context name. Such PRO tunnels can be configured by the PRO administrator in the PRO portal, and allow the client application to connect through the tunnel to an endpoint behind a firewall. In this case, the configured endpoint would be the service URI of a CDM service running behind the data center firewall.
• In all other cases, the CDM library will connect directly to the URI configured through cdm::ServerURI, which defaults to tcp://localhost:19999, i.e. a CDM service running on the local machine and configured to listen on the default port. This is also the default port on which the embedded CDM service will listen.

### Initial checkout of data

After connecting to the CDM service, the cdm::ConnectToApplicationDB function will create (or update) an application database and all of its related category tables, or verify that the existing category setup in the CDM database matches the CDM categories in the model. Which strategy it chooses depends on the parameter cdm::DataSchemaVersion, which you should change whenever you make changes to the contents of categories. If this step succeeds, the cdm::ConnectToApplicationDB function will perform a check-out of all categories in your model to the latest revision on the branch marked as global in the CDM database.
By default, this will be the master branch. You can modify the global branch through the low-level API function cdm::SetGlobalBranch().

### Logging CDM actions

You can add logging to your CDM-enabled application by copying the file CDMLogConfig.cfg from the Config folder in the AimmsCDM library to the main project folder. After doing so, any CDM functionality will start logging its actions in more or less detail (depending on the log level set in CDMLogConfig.cfg) into the file CDMLog.xml. By default, all loggers log at INFO level, i.e. they report back a summary of any major CDM action executed from within the model. By setting the log level for specific loggers to TRACE, you will get very detailed information about the specific sub-components of CDM, which may help you find issues with your CDM setup.

• The CDM logger logs all client-side actions. When the log level is set higher than INFO, this logger creates log lines with detailed information about the state of, and actions upon, all categories and the CDM-managed identifiers therein when committing and pulling data.
• The CDMService logger logs all server-side actions when using the embedded CDM service, corresponding to the client-side actions logged by the CDM logger. If you are not sure whether problems occur at the client or server side, this logger may provide the additional information necessary to debug the issue.
• The CDMDB logger will, at TRACE level, give you very specific information about the queries being executed within the database backing the CDM service.

To interpret the logs created, you can use tools such as the free community version of Log4View to get a quick overview of any problems that may occur with your CDM setup.
## Model constructs to reconsider when using CDM

While AIMMS CDM has been designed to let you create true multi-user decision support applications with minimal effort, there are a number of model constructs that are fundamentally incompatible with CDM, or that may have undesired effects you need to be aware of. In such cases, you are strongly advised to modify your model to circumvent such unwanted interactions. Because the affected areas typically revolve around uses of sets that do not combine well with CDM, the effort to work around these problems is usually manageable.

### Renaming elements

For any root set in your model that is managed through CDM, AIMMS CDM works with a global namespace, maintained in the central CDM database, providing a single revision-independent mapping between element names and globally assigned element numbers. This mapping is used by CDM to translate multidimensional data from a global element numbering to a local element numbering that can differ per client session, because of the potentially session-specific sequence in which elements are added to the root sets used in the application. The element-name/-number mapping provided through these global namespaces needs to work for all clients at all times, that is, over all data revisions stored in the database, and for all available branches. Allowing elements to be simply renamed has the potential to break this paradigm. Typically, such set element renames take place at a particular point in time on a particular branch, which raises the question of what should happen to data in other branches, and in past revisions. Should clients accessing such data see the old name or the new one, and what should happen if a set element is renamed multiple times?
Because there is no really good answer to these questions, AIMMS CDM will intercept all calls to the intrinsic AIMMS function SetElementRename in your model (as well as through the AIMMS API) and raise an execution error. So, if renaming set elements is not an option when using CDM, what other approaches are available?

• If the name of an element changes frequently, you may opt to use a string parameter defined over the set for displaying the element name, instead of the element itself. As the displayed element name has now become data to which different values can be assigned at different revisions and in different branches, you have complete freedom to change the display name of the element as often as you see fit.
• A different approach is to clone the existing element to a new element with the desired name, and subsequently delete the existing element from the set in the branch in which you want the element to be renamed. In this manner all historic data already present in the data repository remains untouched, while you will see the renamed element with identical data in the branch at hand.

### Cloning elements

AIMMS CDM uses a CDMRuntime library containing various shadow identifiers for all CDM-managed identifiers in your model. These shadow identifiers are used to store your application's state during the various stages of the version control actions implemented by the CDM library. When you use the intrinsic CloneElement function in your model, AIMMS will clone an element in a given set, and replicate all data defined over the existing element, in all identifiers anywhere in the model, for the cloned element as well. Because this also applies to the shadow identifiers created by the CDM library, the use of CloneElement will prevent the CDM library from detecting any data changes caused by cloning an existing element.
Because of this unwanted side effect, the CDM library will intercept all calls to the intrinsic AIMMS function CloneElement and raise a runtime error. If you run into this situation, you can simply replace the call to CloneElement by a call to the function cdm::CloneElementInCategory(), which will replicate the data for all relevant identifiers in the given CDM category, but not in the shadow identifiers of the CDMRuntime library. A subsequent commit will then pick up the changes caused by cloning the element and store them in the CDM data repository. You may have to repeat this for other categories in which the element is used as well.

### Deleting elements and calling the CleanDependents operator

When you delete elements from root sets in your model, all data defined over those set elements becomes inactive, or is even deleted when the CleanDependents operator is called. Because the CDM library keeps the state of root sets used in a CDM category within data structures maintained in the DLL that accompanies the CDM library, CDM is still able to pick up the element deletion, and also remove the element from the set in the data repository. However, this by no means guarantees that the inactive data defined over that element will also be reset to their default values in the CDM database. The effect could be that data unexpectedly re-appears in your model when you check out the data after the deleted element has been re-introduced in the data of the model. If you want to be certain that all inactive data is removed from the branch on which you want to delete the element, you can follow the approach described below:

• Call the function cdm::EmptyElementInCategory(). This will remove all data for the given element from all multi-dimensional identifiers in the given CDM category, but will not yet delete the element from the root set.
If you now commit the category, the data in the CDM database will be reset to their default values for the given branch. You may have to repeat this for other categories in which the element has been used as well.
• You can subsequently delete the element from the root set, and remove the element from the CDM data repository as well through a final commit.

### Creating globally unique set element names

If your existing AIMMS application already supports multiple users, and you have designed a mechanism that allows users to create globally unique set elements, for instance by using a centrally stored, ever-increasing integer value to make the element unique, you should reconsider whether such a mechanism can create race conditions when used in combination with CDM. Typically, using AIMMS CDM will increase concurrency compared to an app that does not use CDM. However, if the mechanism you selected to create the unique component of the set element name does not guarantee atomicity, you risk two end-users inadvertently creating the same element in the central CDM data repository. You can counteract this by revisiting the mechanism you selected to create unique set element names, e.g. by including the end-user's initials and using a user-dependent counter to create unique elements. At the cost of a server roundtrip, you can use the function cdm::NextUniqueInteger() to create a globally unique, always increasing integer number in an atomic manner among all database clients. Alternatively, you can forgo counter-based element names altogether and use the function cdm::CreateUuid() to create UUIDs (36-character globally unique hexadecimal strings) to uniquely represent set elements for all clients. This approach does not necessitate an additional call to the CDM service to create a globally unique element name. You can then use a string parameter to define a more user-friendly display name for such elements.
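The UUID-based approach can be sketched as follows. The set, string parameter, and procedure names are illustrative; only cdm::CreateUuid() comes from the CDM API, and SetElementAdd is the intrinsic AIMMS function for adding an element to a set.

```aimms
Set s_Orders {
    Index: i_order;
    Parameter: ep_newOrder;
}
StringParameter sp_OrderDisplayName {
    IndexDomain: i_order;
}
Procedure pr_AddOrder {
    Body: {
        ! Create an element whose name is globally unique across all
        ! clients, without a roundtrip to the CDM service
        SetElementAdd(s_Orders, ep_newOrder, cdm::CreateUuid());

        ! The user-friendly display name is ordinary CDM-managed data,
        ! so it can be changed freely per branch and revision
        sp_OrderDisplayName(ep_newOrder) := "New order";
    }
}
```

Because the UUID never changes, the display name can later be edited without running into the element-renaming restrictions described above.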
Last Updated: January, 2020
Question (a) On a day when the intensity of sunlight is $1.00\textrm{ kW/m}^2$, a circular lens 0.200 m in diameter focuses light onto water in a black beaker. Two polarizing sheets of plastic are placed in front of the lens with their axes at an angle of $20.0^\circ$. Assuming the sunlight is unpolarized and the polarizers are 100% efficient, what is the initial rate of heating of the water in $\textrm{C}^\circ\textrm{/s}$ , assuming it is 80.0% absorbed? The aluminum beaker has a mass of 30.0 grams and contains 250 grams of water. (b) Do the polarizing filters get hot? Explain. 1. $0.0103\textrm{ C}^\circ\textrm{/s}$ 2. Since the polarizers are 100% efficient, they don't absorb any of the sunlight's energy. Therefore they do not get hot.
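The stated answer follows from Malus's law combined with the heat capacity of the beaker-plus-water system. The specific heats used below ($c_w \approx 4186$ and $c_{\text{Al}} \approx 900\ \text{J/(kg}\cdot\textrm{C}^\circ)$) are assumed standard textbook values, not given in the problem:

```latex
P_{\text{abs}} = 0.800 \times \tfrac{1}{2} I \pi r^2 \cos^2 20.0^\circ
             = 0.800 \times \tfrac{1}{2}\,(1000\ \text{W/m}^2)\,\pi (0.100\ \text{m})^2\,(0.883)
             \approx 11.1\ \text{W}

\frac{\Delta T}{\Delta t} = \frac{P_{\text{abs}}}{m_w c_w + m_{\text{Al}} c_{\text{Al}}}
 = \frac{11.1\ \text{W}}{(0.250)(4186) + (0.0300)(900)\ \text{J/C}^\circ}
 \approx 0.0103\ \textrm{C}^\circ\text{/s}
```

The factor $\tfrac{1}{2}$ accounts for the unpolarized light passing the first polarizer, and $\cos^2 20.0^\circ \approx 0.883$ for the second.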
1 JEE Main 2017 (Online) 8th April Morning Slot MCQ (Single Correct Answer) +4 -1
The mean age of 25 teachers in a school is 40 years. A teacher retires at the age of 60 years and a new teacher is appointed in his place. If now the mean age of the teachers in this school is 39 years, then the age (in years) of the newly appointed teacher is:
A 25  B 30  C 35  D 40

2 JEE Main 2016 (Online) 10th April Morning Slot MCQ (Single Correct Answer) +4 -1
The mean of 5 observations is 5 and their variance is 124. If three of the observations are 1, 2 and 6; then the mean deviation from the mean of the data is:
A 2.4  B 2.8  C 2.5  D 2.6

3 JEE Main 2016 (Online) 9th April Morning Slot MCQ (Single Correct Answer) +4 -1
If the mean deviation of the numbers 1, 1 + d, ..., 1 + 100d from their mean is 255, then a value of d is:
A 10.1  B 20.2  C 10  D 5.05

4 JEE Main 2016 (Offline) MCQ (Single Correct Answer) +4 -1
If the standard deviation of the numbers 2, 3, $a$ and 11 is 3.5, then which of the following is true?
A $3a^2 - 26a + 55 = 0$  B $3a^2 - 32a + 84 = 0$  C $3a^2 - 34a + 91 = 0$  D $3a^2 - 23a + 44 = 0$
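Quick verifications for items 1, 3 and 4, using only the standard identities for the mean, mean deviation, and variance:

```latex
\text{(1)}\quad 25 \times 40 - 60 + x = 25 \times 39
\;\Rightarrow\; x = 975 - 940 = 35 \quad \text{(option C)}

\text{(3)}\quad \bar{x} = 1 + 50d,\qquad
\text{MD} = \frac{d\sum_{k=0}^{100}|k-50|}{101}
          = \frac{2d(1+2+\cdots+50)}{101}
          = \frac{2550\,d}{101} = 255
\;\Rightarrow\; d = 10.1 \quad \text{(option A)}

\text{(4)}\quad \sigma^2 = \frac{\sum x_i^2}{4} - \bar{x}^2:\quad
(3.5)^2 = \frac{134 + a^2}{4} - \left(\frac{16+a}{4}\right)^2
\;\Rightarrow\; 196 = 4(134 + a^2) - (16+a)^2
\;\Rightarrow\; 3a^2 - 32a + 84 = 0 \quad \text{(option B)}
```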
# The Stacks Project ## Tag 09AW Lemma 15.83.4. Let $A$ be a ring and $I \subset A$ an ideal. Suppose given $K_n \in D(A/I^n)$ and maps $K_{n + 1} \to K_n$ in $D(A/I^{n + 1})$. Assume 1. $A$ is $I$-adically complete, 2. $K_1$ is a perfect object, and 3. the maps induce isomorphisms $K_{n + 1} \otimes_{A/I^{n + 1}}^\mathbf{L} A/I^n \to K_n$. Then $K = R\mathop{\rm lim}\nolimits K_n$ is a perfect, derived complete object of $D(A)$ and $K \otimes_A^\mathbf{L} A/I^n \to K_n$ is an isomorphism for all $n$. Proof. Combine Lemmas 15.83.3 and 15.83.2 (to get derived completeness). $\square$ The code snippet corresponding to this tag is a part of the file more-algebra.tex and is located in lines 22640–22653 (see updates for more information). \begin{lemma} \label{lemma-Rlim-perfect-gives-complete} Let $A$ be a ring and $I \subset A$ an ideal. Suppose given $K_n \in D(A/I^n)$ and maps $K_{n + 1} \to K_n$ in $D(A/I^{n + 1})$. Assume \begin{enumerate} \item $A$ is $I$-adically complete, \item $K_1$ is a perfect object, and \item the maps induce isomorphisms $K_{n + 1} \otimes_{A/I^{n + 1}}^\mathbf{L} A/I^n \to K_n$. \end{enumerate} Then $K = R\lim K_n$ is a perfect, derived complete object of $D(A)$ and $K \otimes_A^\mathbf{L} A/I^n \to K_n$ is an isomorphism for all $n$. \end{lemma} \begin{proof} Combine Lemmas \ref{lemma-Rlim-perfect-gives-perfect} and \ref{lemma-Rlim-pseudo-coherent-gives-complete-pseudo-coherent} (to get derived completeness). \end{proof}
# JEE Main & Advanced Chemistry

Equilibrium: Ionic Product Of Water

Category: JEE Main & Advanced

Water is a weak electrolyte and undergoes self-ionisation to a small extent. The product of the concentrations of $H^+$ and $OH^-$ ions in water at a particular temperature is known as the ionic product of water. It is designated as $K_w$.

$H_2O \rightleftharpoons H^+ + OH^-$; $\Delta H = +57.3\ \text{kJ mol}^{-1}$

$K = \frac{[H^+][OH^-]}{[H_2O]}$; $K[H_2O] = [H^+][OH^-]$; $K_w = [H^+][OH^-]$

The value of $K_w$ increases with increasing temperature, i.e., the concentrations of $H^+$ and $OH^-$ ions increase with temperature. The value of $K_w$ at $25^\circ C$ is $1\times 10^{-14}\ \text{mole}^2\,\text{litre}^{-2}$.

Since pure water is neutral in nature, the $H^+$ ion concentration must be equal to the $OH^-$ ion concentration:

$[H^+] = [OH^-] = x$, so $[H^+][OH^-] = x^2 = 1\times 10^{-14}$, giving $x = 1\times 10^{-7}\ M$, i.e. $[H^+] = [OH^-] = 1\times 10^{-7}\ \text{mole litre}^{-1}$.

This shows that at $25^\circ C$, only $10^{-7}$ mole of water per litre is in ionic form, out of a total of approximately 55.5 moles. Thus when:

$[H^+] = [OH^-]$: the solution is neutral
$[H^+] > [OH^-]$: the solution is acidic
$[H^+] < [OH^-]$: the solution is basic
A function that writes a correctly formatted .csv file from a MSEtool Data object

Data2csv(Data, file = NULL, simno = 1, overwrite = F, keepNAs = T)

## Arguments

Data: An object of class 'Data'.
file: Character string. The name of the location and file you wish to create (e.g. "C:/temp/mydata.csv").
simno: Integer. An optional argument to specify the simulation number if writing simulated data.
overwrite: Boolean. Should existing data files be automatically overwritten?
keepNAs: Boolean. Should slots with NAs still be written to the data file?

## Author

T. Carruthers
An accurate modeling of thin film flows down an incline for inertia dominated regimes. (English) Zbl 1126.76304 Summary: An accurate model of a wavy film flow down an inclined plane is developed using the weighted residual technique first proposed by C. Ruyer-Quil and P. Manneville [Eur. Phys. J. B 15, 357–369 (2000)]. The model includes third-order terms in order to better capture the effects of small Weber and high Reynolds numbers. This is made possible by an appropriate refinement of the velocity profile. To this end, a free parameter $\alpha$ acting on the flexibility of the velocity profile is introduced. It is shown from linear stability analysis that, for a suitable choice of $\alpha$, the model follows the Orr-Sommerfeld equation quite closely for all Weber and Kapitza numbers. The improvement is of course more substantial in the inertia-dominated regimes. Some prominent qualitative and quantitative characteristics of traveling wave solutions are then derived from a simplified version of the model that is beforehand converted into a three-dimensional dynamical system. MSC: 76A20 Thin fluid films 76M25 Other numerical methods (fluid mechanics) (MSC2010) Full Text: References: [1] Kliakhandler, I.L.; Sivashinsky, G.I., Viscous damping and instabilities in stratified liquid film flowing down a slightly inclined plane, Phys. fluids, 9, 1, 23-30, (1996) [2] Nusselt, W., Die oberflachenkondensation des wasserdampfes, Z. ver. dtsch. ing, 60, 541-552, (1916) [3] Chang, H.C., Wave evolution on a falling film, Ann. rev. fluid mech, 26, 103-136, (1994) [4] Oron, A.; Gottlieb, O., Nonlinear dynamics of temporally excited falling liquid films, Phys. fluids, 14, 8, 2622-2636, (2002) · Zbl 1185.76289 [5] Liu, J.; Paul, J.D.; Gollub, J.P., Measurements of the primary instability of film flow, J. fluid mech, 250, 69-101, (1993) [6] Liu, J.; Gollub, J.P., Solitary wave dynamics of film flows, Phys.
fluids, 6, 1702-1711, (1994) [7] Liu, J.; Schneider, J.B.; Gollub, J.P., Three dimensional instabilities of film flows, Phys. fluids, 7, 55-67, (1995) [8] Kapitza, P.L.; Kapitza, S.P., Wave flow of thin layers of viscous fluid, Zh. ekper. teor. fiz, 19, 105, (1949) [9] Brauner, N.; Maron, D.M., Characteristics of inclined thin film. waviness and the associated mass transfer, Int. J. heat mass transfer, 25, 99-110, (1982) [10] Benjamin, T.B., Wave formation in laminar flow down an inclined plane, J. fluid mech, 2, 554-574, (1957) · Zbl 0078.18003 [11] Yih, C.S., Stability of liquid flow down an inclined plane, Phys. fluids, 6, 321-330, (1963) · Zbl 0116.19102 [12] Hooper, A.P.; Boyd, W.G.C., Shear flow instability at the interface between two viscous fluids, J. fluid mech, 128, 507-528, (1983) · Zbl 0557.76044 [13] Kelly, R.E.; Goussis, D.A.; Lin, S.P.; Hsu, F.K., The mechanism of surface wave instability in film flow down an inclined plane, Phys. fluids, 1, 819-828, (1989) [14] Brevdo, L.; Laure, P.; Dias, F.; Bridges, T.J., Linear pulse structure and signalling in a film flow on an inclined plane, J. fluid mech, 396, 37-71, (1999) · Zbl 0982.76037 [15] Bach, P.; Villadsen, J., Simulation of the vertical flow of a thin, wavy film using a finite-element method, Int. J. heat mass transfer, 27, 815-827, (1984) · Zbl 0543.76040 [16] Kheshgi, H.S.; Scriven, L.E., Disturbed film flow on a vertical plate, Phys. fluids, 30, 990-997, (1987) [17] Choi, W.; Camassa, R., Fully nonlinear internal waves in a two-fluid system, J. fluid mech, 396, 1-36, (1999) · Zbl 0973.76019 [18] Benney, D.J., Long waves on liquid films, J. math. phys, 45, 150-155, (1966) · Zbl 0148.23003 [19] Pumir, A.; Manneville, P.; Pomeau, Y., On solitary waves running down an inclined plane, J. fluid mech, 135, 27-50, (1983) · Zbl 0525.76016 [20] Rosenau, P.; Oron, A.; Hyman, J.M., Bounded and unbounded patterns of the benney equation, Phys. 
fluids A, 4, 1102-1104, (1992) [21] Sivashinsky, G.I.; Michelson, D.M., On irregular wavy flow of a liquid film down a vertical plane, Prog. theor. phys, 63, 2112-2114, (1980) [22] Topper, J.; Kawahara, T., Approximate equations for long nonlinear waves on a viscous fluid, J. phys. soc. Japan, 44, 663-666, (1974) · Zbl 1334.76054 [23] Ooshida, T., Surface equation of falling film flows with moderate Reynolds number and large but finite Weber number, Phys. fluids, 11, 3247-3269, (1999) · Zbl 1149.76560 [24] Panga, M.K.R.; Balakotaiah, V., Low dimensional models for vertically falling viscous films, Phys. rev. lett, 90, 15, 1-4, (2003) [25] Shkadov, V.Y., Wave conditions in the flow of thin layer of a viscous liquid under the action of gravity, Izv. akad. nauk SSSR, mekh. zhidk. gaza, 1, 43-51, (1967) [26] Nguyen, L.T.; Balakotaiah, V., Modeling and experimental studies of wave evolution on free falling viscous films, Phys. fluids, 12, 2236-2256, (2000) · Zbl 1184.76391 [27] Lee, J.J.; Mei, C.C., Stationary waves on an inclined sheet of viscous fluid at high Reynolds and moderate Weber numbers, J. fluid mech, 307, 191-229, (1996) · Zbl 0859.76020 [28] Yu, L.Q.; Wadsen, F.K.; Duckler, A.E.; Balakotaiah, V., Nonlinear evolution of waves on falling films at high Reynolds number, Phys. fluids, 7, 1886-1902, (1995) · Zbl 1032.76556 [29] Kliakhandler, I.L., Inverse cascade in film flows, J. fluid mech, 423, 205-225, (2000) · Zbl 0979.76006 [30] Ruyer-Quil, C.; Manneville, P., Modeling film flows down an inclined planes, Eur. phys. J. B, 6, 277-292, (1998) [31] Ruyer-Quil, C.; Manneville, P., Improved modeling of flows down inclined planes, Eur. phys. J. B, 15, 357-369, (2000) [32] Ruyer-Quil, C.; Manneville, P., Further accuracy and convergence results on the modeling of flows down inclined planes by weighted residual approximations, Phys. 
fluids, 14, 170-183, (2002) · Zbl 1184.76467
# ::hwat::utils::PositionDummyOnImport

Positions the dummy's H-point at the required coordinates and, if required, rotates the dummy's back angle by the given amount.

## Syntax

PositionDummy rootSysId hPointX hPointY hPointZ [backangle] [xRot] [yRot] [zRot]

## Arguments

rootSysId: The ID of the root system of the dummy.
hPointX: The x-coordinate of the final position of the H-point of the dummy.
hPointY: The y-coordinate of the final position of the H-point of the dummy.
hPointZ: The z-coordinate of the final position of the H-point of the dummy.
backangle: The angle to rotate the dummy's back about the y-axis after it has been positioned. Default = 0.0
xRot: The angle to rotate the dummy about the x-axis after import. Default = 0.0
yRot: The angle to rotate the dummy about the y-axis after import. Default = 0.0
zRot: The angle to rotate the dummy about the z-axis after import. Default = 0.0

## Returns

Success: 1
Failure: {}

## Example

::hwat::utils::PositionDummy 1234 3100 -370 500 10 0.0 0.0 180.0
# Chapter 10: The University of Rochester Model of Breast Cancer Detection and Survival

Abstract

This paper presents a biologically motivated model of breast cancer development and detection allowing for arbitrary screening schedules and the effects of clinical covariates recorded at the time of diagnosis on posttreatment survival. Biologically meaningful parameters of the model are estimated by the method of maximum likelihood from the data on age and tumor size at detection that resulted from two randomized trials known as the Canadian National Breast Screening Studies. When properly calibrated, the model provides a good description of the U.S. national trends in breast cancer incidence and mortality. The model was validated by predicting some quantitative characteristics obtained from the Surveillance, Epidemiology, and End Results data. In particular, the model provides an excellent prediction of the size-specific age-adjusted incidence of invasive breast cancer as a function of calendar time for 1975–1999. Predictive properties of the model are also illustrated with an application to the dynamics of age-specific incidence and stage-specific age-adjusted incidence over 1975–1999. The purpose of the modeling effort described herein is twofold:

- Designing explanatory and predictive tools for quantitative description of the effects of breast cancer screening for various screening strategies, including the national trends in breast cancer incidence and mortality under the base case scenario
- Developing methods for statistical inference on the natural history of breast cancer in terms of biologically meaningful parameters

In what follows, we use the term "prediction" to mean extrapolation of the basic epidemiological descriptors from one setting to another, including new interventions and risk factors, but not the problem of forecasting future population trends.
The latter sort of model-based predictions would require sufficient knowledge of future changes in all components of the natural history of the disease, including future changes of cancer risk over time. See (1) for the discussion of this problem in regard to the age–period–cohort model. The traditional approach to mathematical or simulation modeling of cancer screening tends to describe the process of tumor development in only one dimension, that is, in time. A broader methodological idea is to construct a stochastic model of cancer development and detection that yields the multivariate distribution of observable variables at the time of diagnosis (2). By focusing on such multivariate observations, rather than just on the age of patients at diagnosis, this idea seeks to invoke an additional source of information (available only at the time of detection) to improve estimation of unobservable parameters of cancer latency. Indeed, the process of tumor progression manifests itself as certain changes in many characteristics of the tumor. Therefore, this process is multidimensional in nature, and modeling tumor progression as a (linear) sequence of stages represents a poor approximation to a more general multivariate model of the natural history of cancer. The idea of multiple pathways of cancer progression was introduced in the path-breaking works by (3) and Feldstein and Zelen (4). In this paper, we base our inference on the natural history of breast cancer on two important variables, namely, the tumor size and the age of a patient at diagnosis, using mechanistic models of tumor development and detection to derive an analytic expression of the joint distribution of the said variables.
In doing so, we take advantage of a mechanistic two-stage model of carcinogenesis to describe the "disease-free" stage of breast cancer development and the so-called quantal response model to relate the chance of detecting a tumor to its size; the latter mechanism applies equally to both incident and screen-detected (prevalent) cases. Some authors (5–7) have long realized the advantages of multivariate analysis in screening studies, but specific modeling and inferential techniques require a much higher level of sophistication than that in the earlier attempts at a comprehensive theory of cancer screening. The proposed model of the natural history of breast cancer has the following advantages:

- It is based on a minimal set of biologically plausible assumptions.
- It is proven to be completely identifiable.
- It is formulated in terms of probabilistic characteristics that can be estimated in the presence of data censoring, thereby requiring no demographic information for their evaluation.
- When applied to the data generated by randomized screening trials, the model allows estimation of all parameters by the method of maximum likelihood, whereas a subset of its parameters responsible for the progression (preclinical) stage of tumor development can independently be estimated from the population-based data available from the Surveillance, Epidemiology, and End Results (SEER) National Cancer Institute (NCI) program.
- All parameters are estimated from epidemiological data using the same model, which makes them far more reliable than those available from the literature, because the latter estimates have been obtained under dramatically dissimilar models and assumptions.
- When extrapolations are made to a different dataset, minimal calibration is needed, involving only those parameters that are likely (on biological grounds) to vary between the two sets of data.
The predictive power of the model has been evaluated in several applications under strict conditions allowing no further calibration of any of its parameters already estimated (calibrated) in a different setting. The model has been built in part on the base case inputs shown in Table 1.

Table 1. Base case parameter usage*

| Parameter | Usage |
| --- | --- |
| Base case treatment dissemination | Not needed |
| Base case mammography dissemination | Used in provided form |
| Base case other-cause mortality | Not needed |
| Base case age-specific breast cancer incidence | Used for validation |
| Base case age-adjusted breast cancer incidence | Some values were used for calibration |
| Base case 1975 breast cancer prevalence | Not needed |
| Base case 1975 cause-specific survival | Not needed |
| Base case historical survival | Not needed |
| Base case 1975 breast cancer mortality | Used for calibration |
| Base case breast cancer APC incidence | Uses a processed version of the standard parameter (relative risks) |
| Base case treatment effect | Not needed |
| Base case SEER 9 mortality | Used for calibration |

* APC = age–period–cohort; SEER = Surveillance, Epidemiology, and End Results.
The natural history model allows us to estimate the effects of screening on the age-specific cancer incidence and the distribution of major covariates at the time of diagnosis. This inference is entirely independent of the data on cancer mortality. To model the effect of screening on cancer-specific mortality, one needs to establish a quantitative relationship between clinical covariates (e.g., age, stage, tumor size) and postdetection survival of patients with breast cancer. Regression survival models are designed to estimate the survival time distribution conditional on covariate information, whereas the joint (multivariate) distribution of covariates at the time of diagnosis provides a link between the natural history of breast cancer and cancer-specific survival. The periodic screening evaluation methodology (8), although elegant, does not represent a strong alternative to flexible natural history models because of its many limitations, among which is the assumption that any effect of birth cohort is negligible. Our analysis of the Utah Population Database has shown that breast cancer risk varies substantially between birth cohorts separated by time intervals as short as 5 years. We therefore resorted to a class of extended hazard regression models with cure that has been extensively studied in recent years (9–15). Even the simplest model from this class provides an excellent description of the effects of clinical covariates on cancer survival (13,16,17). In combination with the natural history component, this model was used in our studies to describe the U.S. national trend in breast cancer mortality from 1975 to 1999. The observed mortality trend is consistent with the assumption that there has been a relatively long history of incremental improvements in breast cancer treatment.
The conjecture is corroborated by a recent analysis of breast cancer mortality in the United Kingdom (18) indicating that the mortality rate began to decrease before the start of mammographic screening. The model has proven adequate for the complex phenomena that have so far been explored in relation to cancer incidence and mortality. This statement, however, does not mean that our model is either perfect or universal; it may call for further modifications if future applications so require. We believe that improved models of cancer screening can be developed in the future by including more components (in addition to age and tumor size) in the vector of clinical covariates accessible to measurement at the time of diagnosis. See (2) for the general idea and associated analytical techniques.

MODELING THE NATURAL HISTORY OF BREAST CANCER

Our approach attempts to implement the following concept formulated by Albert et al. (19): more realistic models for tumor detectability can be synthesized by first modeling the behavior of tumor growth over time and superimposing a model for detection probability as a function of tumor size. This idea was set forth in the so-called quantal response model of tumor detection developed by Bartoszyński and other authors in several publications [see (20) for details and references]. Below we outline the most basic features of the class of quantal response models of cancer detection combined with mechanistically motivated models of carcinogenesis.

A Stochastic Model of Tumor Latency

The latent period of tumor development can be broken down into three stages: formation of initiated cells; promotion of initiated cells, resulting in the first malignant clonogenic cell; and subsequent tumor growth and progression until the event of detection occurs. Let T be the age of an individual at tumor onset, and W the time of spontaneous tumor detection (in the absence of screening) measured from the onset of disease.
We use a two-stage stochastic model of carcinogenesis proposed by Moolgavkar et al. (21–23) to specify the probability density function, pT(t), of the random variable T. The model is given by the following survival function

$\bar{F}_T(t) := \int_t^{\infty} p_T(u)\,du = \left[\frac{(A+B)e^{At}}{B + Ae^{(A+B)t}}\right]^{\rho}, \quad t \ge 0,$ [10.1]

from which the density pT(t) can be derived. Here A, B, ρ > 0 are identifiable parameters of the model (24–26). Formula [10.1] specifies the distribution of the duration of the first two stages of carcinogenesis. The process of initiation is usually modeled as a Poisson process. Here the parameter ρ is the ratio of the initiation rate to the rate of proliferation of initiated cells, whereas A and B are parameters of the promotion time distribution. This model has proven to provide a good fit to diverse data on animal and human carcinogenesis [see (27) for goodness-of-fit testing]. Introduce a random variable S to represent tumor size (the number of cells in a tumor) at spontaneous detection. Suppose that the law of tumor growth is described by a deterministic function $f: [0,\infty) \rightarrow [1,\infty)$ with f(0) = 1, so that S = f(W). It is assumed also that the random variables T and W are absolutely continuous and stochastically independent; the function f is differentiable and f′ > 0; and the hazard rate for spontaneous tumor detection is proportional to the current tumor size with coefficient α > 0. It follows from the above assumptions that

$p_W(w) = \alpha f(w)\, e^{-\alpha \int_0^w f(u)\,du}, \quad w \ge 0.$

Therefore,

$p_S(s) = \alpha s\, g'(s)\, e^{-\alpha \int_0^{g(s)} f(u)\,du}, \quad s \ge 1,$

where g stands for the inverse function of f: g = f⁻¹.
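As a concrete check on formula [10.1], the survival function F̄T(t) can be evaluated directly. The sketch below is a minimal illustration, not the authors' code; the parameter values for A and B are only of the order reported later in Table 2, and ρ is chosen arbitrarily at a plausible cohort level.

```python
import math

def surv_onset(t, A, B, rho):
    """Two-stage model survival function F_T-bar(t), Eq. [10.1]."""
    return ((A + B) * math.exp(A * t) / (B + A * math.exp((A + B) * t))) ** rho

# Illustrative values only (A, B of the order reported in Table 2; rho at a cohort level)
A, B, rho = 1.112e-4, 0.1203, 0.07
print(surv_onset(0.0, A, B, rho))   # equals 1: no onset at age 0
print(surv_onset(50.0, A, B, rho))  # probability of remaining tumor-free at age 50
```

Note that F̄T(0) = 1 by construction, since the bracketed ratio equals 1 at t = 0, and the function decreases monotonically to 0 as t grows.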
For deterministic exponential tumor growth with rate λ > 0 (f(w) = e^{λw}), we have

$p_S(s) = \frac{\alpha}{\lambda} e^{-\frac{\alpha}{\lambda}(s-1)}, \quad p_W(w) = \alpha e^{\lambda w - \frac{\alpha}{\lambda}(e^{\lambda w}-1)}, \quad s \ge 1,\ w \ge 0.$ [10.2]

Here tumor size at detection S follows a translated exponential distribution with parameter α/λ, whereas the distribution of age at detection measured from the disease onset is a Gompertz distribution. The random variable W has the same meaning as the preclinical stage duration within the traditional approach to modeling the natural history of cancer. Consider the random vector Y := (T + W, S), whose components are interpreted as age and tumor size at diagnosis, respectively. The probability density function of Y is given by

$p_Y(u,s) = p_T(u - g(s))\, p_S(s), \quad u \ge g(s),\ s \ge 1.$

This distribution is identifiable (28). Remark 1. A distribution P(x;θ), where θ is a vector of parameters, is said to be identifiable if from the equality P(x;θ₁) = P(x;θ₂), valid for all x, it follows that θ₁ = θ₂. For exponential tumor growth, we obtain

$p_Y(u,s) = \frac{\alpha}{\lambda} e^{-\frac{\alpha}{\lambda}(s-1)}\, p_T\!\left(u - \frac{\ln s}{\lambda}\right), \quad u \ge 0,\ 1 \le s \le e^{\lambda u}.$

In practice, it is not the number of tumor cells S that is observable but the volume, V, in appropriate units, and one needs to change variables using the equality S = V/γ, where γ is the volume of one tumor cell (γ ≈ 10⁻⁹ cm³). The parameter γ has only a scaling effect on the distribution of tumor volume. The most straightforward generalization of the model can be accomplished by assuming that some of its parameters are random.
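The claim that S is translated exponential under exponential growth can be verified by simulation: inverse-transform sampling of W from the Gompertz density in [10.2] gives S = e^{λW} = 1 − (λ/α) ln U, whose mean is 1 + λ/α. The parameter values below are purely illustrative.

```python
import math, random

random.seed(1)
alpha, lam = 0.5, 1.0   # illustrative detection-sensitivity and growth rates

def sample_w():
    """Inverse-transform draw of W from F_W(w) = 1 - exp(-(alpha/lam)(e^{lam w} - 1))."""
    u = random.random()
    return math.log(1.0 - (lam / alpha) * math.log(u)) / lam

sizes = [math.exp(lam * sample_w()) for _ in range(100_000)]
mean_size = sum(sizes) / len(sizes)
print(mean_size)  # close to 1 + lam/alpha = 3 for these values
```

The sample mean of the simulated sizes approaches 1 + λ/α, as the translated exponential law predicts.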
In particular, suppose that 1/λ is gamma distributed with parameters a and b. Randomness of λ reflects the individual variability of tumor growth rate. Then we have

$p_Y(u,s) = \frac{\alpha b^a}{(\ln s)^{a+1}\,\Gamma(a)} \int_0^u (u-x)^a \exp\left\{-\frac{b + \alpha(s-1)}{\ln s}(u-x)\right\} p_T(x)\,dx,$ [10.3]

for u ≥ 0, s ≥ 1. Here the marginal distribution of tumor size is a Pareto distribution (20). As shown by our analysis of the Utah Population Data Set, the distribution [10.3] provides a good fit to cohort data on breast cancer (29).

Modeling the Impact of Screening on the Natural History of Breast Cancer

Let 0 < τ1 < τ2 < … < τn be a given screening schedule. It is convenient to set τ0 := 0 and τn+1 := ∞. Let W0 be the time of spontaneous detection (incident and interval cases) and W1 the time of screening-based detection, both times being measured from the disease onset. Then for the time W of combined detection we have W = min(W0, W1). It is natural to assume that the random variables W1 and W0 are conditionally independent given the onset time T. This assumption is plausible for deterministic tumor growth if we hypothesize also that W0 and T are independent and that the hazard rate for the distribution of W0, as well as the discrete hazard rate of tumor detection at a medical exam (provided the previous exams did not detect the tumor) given T, are both proportional (with different coefficients of proportionality α0 and α1, respectively) to the current tumor size.
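The compounding step can likewise be sketched by simulation: draw 1/λ from a gamma distribution, then draw S − 1 from an exponential distribution with rate α/λ. With shape a and rate b for 1/λ, E[λ] = b/(a − 1) for a > 1, so E[S] = 1 + b/((a − 1)α). All parameter values below are hypothetical, chosen only to make the check easy.

```python
import random

random.seed(2)
a, b, alpha = 3.0, 2.0, 1.0   # hypothetical: 1/lambda ~ Gamma(shape=a, rate=b)

def sample_size():
    inv_lam = random.gammavariate(a, 1.0 / b)             # draw 1/lambda (scale = 1/rate)
    lam = 1.0 / inv_lam
    return 1.0 + (lam / alpha) * random.expovariate(1.0)  # S - 1 ~ Exp(alpha/lam) given lam

sizes = [sample_size() for _ in range(200_000)]
mean_size = sum(sizes) / len(sizes)
# E[lambda] = b/(a-1) = 1 here, so E[S] = 1 + 1/alpha = 2
print(mean_size)
```

The heavy (Pareto-type) tail of the mixed size distribution is visible in the simulated sample; the mean is finite here because a > 1.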
Then we have, for u ≥ 0, s ≥ 1:

$F_Y(u,s) = \Pr(T + W \le u,\ f(W) \le s) = \int_0^u F_{W|T=t}(\min\{u-t,\, g(s)\})\, f_T(t)\,dt,$ [10.4]

where

$F_{W|T=t}(w) = 1 - e^{-[\alpha_0 \Phi(w) + \alpha_1 \sum_{k=i+1}^{j} f(\tau_k - t)]},$

with Φ(w) = ∫₀ʷ f(u)du the cumulative relative hazard of spontaneous detection, and with the indices i and j defined by τi ≤ t < τi+1 and τj ≤ t + w < τj+1. It can be proven that under mild conditions the distribution of the random vector Y is identifiable (30). Unfortunately, no identifiability results are available for its randomized versions because of the prohibitive complexity of the analytic expression for this distribution, even for exponential tumor growth with random growth rate. The same is true for the joint distribution given by formula [10.3]. However, our simulation experiments have shown that identifiability of the model is preserved if the compounding procedure uses the gamma distribution for λ or its reciprocal. Remark 2. It is tempting to generalize the model by making the growth rate (or the preclinical stage duration) dependent on the age of a patient at the time of tumor onset. There have been attempts to incorporate this element into models of the natural history of cancer. However, the fact that no tangible birth cohort effect on the nonparametrically estimated distribution of tumor size at detection is seen in a cohort study (29) is consistent with stochastic independence of the tumor growth rate and the age at tumor onset. The same argument applies equally to the sensitivity parameter α1. There are indications in the literature (31) that the sensitivity of mammography increases with age. However, the tendency does not appear to be strong enough to manifest itself in the type of data we deal with in this paper. Next we need a formula for the probability of detection at a given screen. Let τi ≤ t < τi+1, 0 ≤ i ≤ n.
For 0 ≤ i ≤ n − 1 and i + 1 ≤ k ≤ n, define the probability pt(k) := Pr(W1 = τk − t | T = t) of tumor detection at the kth screen given cancer onset at moment t, and by $p_t(\infty) = 1 - \sum_{k=i+1}^{n} p_t(k)$ the corresponding conditional probability that the tumor is not detected by screening. Introduce a discrete analogue of the conditional (given T = t) hazard rate for screening-based detection as

$\mu_t = \sum_{k=i+1}^{n} r_t(k)\, \delta_{\tau_k - t},$

where δx stands for the Dirac measure at x, and the sum over the empty set of indices is set, as usual, to be zero. Then the following formula holds (32):

$p_t(k) = e^{-\sum_{j=i+1}^{k-1} r_t(j)}\left[1 - e^{-r_t(k)}\right], \quad i+1 \le k \le n.$

Observe that this holds true for all k = 1, …, n, if we set pt(k) = rt(k) = 0 for 1 ≤ k ≤ i. Assuming also that the conditional discrete rate of screening-based detection is proportional to the current tumor size, rt(k) = α1S(τk − t), i + 1 ≤ k ≤ n, α1 > 0, we have

$p_t(k) = e^{-\sum_{j=i+1}^{k-1} \alpha_1 S(\tau_j - t)}\left[1 - e^{-\alpha_1 S(\tau_k - t)}\right], \quad i+1 \le k \le n.$ [10.5]

Estimation of Model Parameters

When considering a study design typical for randomized screening trials, it is possible to derive the likelihood function on the basis of the observations of tumor size and age at diagnosis. Let τ = {τ1 < τ2 < … < τn} be a sequence of screening ages. We proceed from the model assumptions introduced earlier with an arbitrary probability density function pΛ(x) for the rate, λ, of tumor growth. The model is formulated in terms of tumor size and age at detection, denoted by S and U, respectively. The contributions to the likelihood function are given by formulas [10.6]–[10.8] below (30). Here F̄T is the survival function of the onset time, and τk ≤ c < τk+1, 0 ≤ k ≤ n.
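Formula [10.5] translates directly into code. The sketch below assumes exponential tumor growth, S(w) = e^{λw}, and uses a hypothetical screening schedule and parameter values; by construction the detection probabilities and pt(∞) sum to one.

```python
import math

def screen_detect_probs(t, schedule, alpha1, lam):
    """Eq. [10.5]: probability of detection at each screen after onset at t,
    with exponential tumor growth S(w) = exp(lam * w)."""
    rates = [alpha1 * math.exp(lam * (tau - t)) for tau in schedule if tau > t]
    probs, cum = [], 0.0
    for r in rates:
        probs.append(math.exp(-cum) * (1.0 - math.exp(-r)))  # miss earlier screens, detect now
        cum += r
    p_never = math.exp(-cum)  # p_t(infinity): tumor never screen-detected
    return probs, p_never

probs, p_never = screen_detect_probs(t=47.3, schedule=[45, 46, 47, 48, 49, 50],
                                     alpha1=1e-3, lam=1.2)
print(sum(probs) + p_never)  # equals 1 by construction
```

Only the screens occurring after onset (here at ages 48, 49, and 50) contribute, matching the convention that screens before onset carry zero detection rate.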
Since age at enrollment varies among study subjects, the above formulas must be modified in the usual way to incorporate left random truncation (33). For interval and incident cases, the contribution of an observation (u, s) is equal to the value at (u, s) of the joint p.d.f. of age and tumor size at spontaneous (clinical) detection:

$p_1(u,s) = \alpha_0 \int_0^{u-\tau_k} \exp\left\{-\frac{\alpha_0 x}{\ln s}(s-1)\right\} p_T(u-x)\, p_\Lambda\!\left(\frac{\ln s}{x}\right) \frac{dx}{x} + \alpha_0 \sum_{i=0}^{k-1} \int_{u-\tau_{i+1}}^{u-\tau_i} \exp\left\{-\left[\frac{\alpha_0 x}{\ln s}(s-1) + \alpha_1 s \sum_{j=i+1}^{k} e^{\frac{\ln s}{x}(\tau_j - u)}\right]\right\} p_T(u-x)\, p_\Lambda\!\left(\frac{\ln s}{x}\right) \frac{dx}{x},$ [10.6]

where τk ≤ u < τk+1, 0 ≤ k ≤ n. For screen-detected cases, the contribution of an observation (τk, s), 1 ≤ k ≤ n, equals the value of the p.d.f.
of tumor size at detection on the kth screen:

$p_2(\tau_k, s) = \frac{1 - e^{-\alpha_1 s}}{s} \sum_{i=0}^{k-1} \int_{\tau_k - \tau_{i+1}}^{\tau_k - \tau_i} \exp\left\{-\frac{\alpha_0 (s-1)x}{\ln s}\right\} \exp\left\{-\alpha_1 s \sum_{j=i+1}^{k-1} e^{-\frac{\ln s}{x}(\tau_k - \tau_j)}\right\} p_T(\tau_k - x)\, p_\Lambda\!\left(\frac{\ln s}{x}\right) \frac{dx}{x}, \quad 1 \le k \le n.$ [10.7]

For censored observations, the contribution of an observation c equals $\bar{F}_U(c)$, where U is the age at combined tumor detection:

$p_3(c) = \bar{F}_T(c) + \int_0^\infty \int_0^{c-\tau_k} \exp\left\{-\frac{\alpha_0}{\lambda}(e^{\lambda x} - 1)\right\} p_T(c-x)\, p_\Lambda(\lambda)\, dx\, d\lambda + \sum_{i=0}^{k-1} \int_0^\infty \int_{c-\tau_{i+1}}^{c-\tau_i} \exp\left\{-\left[\frac{\alpha_0}{\lambda}(e^{\lambda x} - 1) + \alpha_1 e^{-\lambda(c-x)} \sum_{j=i+1}^{k} e^{\lambda \tau_j}\right]\right\} p_T(c-x)\, p_\Lambda(\lambda)\, dx\, d\lambda.$ [10.8]

We estimated all parameters incorporated into the model from individual data generated by the Canadian National Breast Screening Studies (CNBSS). This study consists of two individually randomized screening trials conducted during 1980–96 and monitored to 1996 in 15 centers in Canada. Both trials were coordinated at the University of Toronto by a study team directed by one of the coauthors of this paper (Miller).
The first study (CNBSS-1) included 50 430 women aged 40–49 years on study entry and evaluated the efficacy of annual mammography, breast physical examination, and breast self-examination instruction (BSE) in reducing breast cancer mortality (34). In the mammography plus physical examination group, 62% of women received five annual screens, including two-view mammography, physical examination, and BSE. The remaining women, recruited later, received four annual screens. The usual care group were not recalled for rescreening after their first visit, when they had a breast physical examination plus BSE, but were mailed annual questionnaires. The second study (CNBSS-2) included 39 405 women aged 50–59 years on study entry and evaluated the contribution of annual mammography over and above annual physical examination of the breasts plus BSE in the reduction of mortality from breast cancer (34). Women were randomized to receive annual mammography and physical examination plus BSE or annual physical examination plus BSE only, for a total of five or four screens. For both trials, center coordinators conducted the randomization using allocation lists prepared by the central office. Randomization was independent of physical examination findings. The center coordinators collected surgery and pathology reports for all breast diagnostic and therapeutic procedures. CNBSS pathologists reviewed all slides. If the community and CNBSS pathologists disagreed, a panel of three to five CNBSS pathologists conducted a blind and independent review. Extensive quality-control procedures were carried out while data collection was in progress. After the screening centers closed in 1988, all women known to have breast cancer were followed up annually by the CNBSS central office until June 30, 1996. Passive follow-up of all participants through linkage with the National Cancer Registry identified new diagnoses of breast cancer in study participants to December 31, 1993.
The central office collected pathology reports for postscreen breast cancers. For these, the community diagnosis was accepted for study purposes. Deaths that occurred before a participant's screening schedule was completed were identified by family members in response to the annual mailed questionnaire. Attending physicians, who received annual requests for information on women with breast cancer, reported deaths to June 30, 1996. Linkage with the CMDB (including deaths in Canadians resident in the United States at time of death) identified causes of death in the entire cohort to December 31, 1993. Independent reviewers, blind as to allocation, reviewed clinical records and classified the underlying cause of death. In CNBSS-1, a total of 592 invasive and 71 in situ breast cancers were diagnosed by December 31, 1993, in the mammography plus physical examination group, compared with 552 and 29, respectively, in the usual care group. Of these, 208 and 58, respectively, were screen detected. At 7 years, there were 38 breast cancer deaths in the mammography plus physical examination group and 28 in the usual care group, for a rate ratio of 1.36 (35). At the 11- to 16 (average 13)-year follow-up, there were 105 and 108 breast cancer deaths, respectively, for a rate ratio, adjusted for mammograms performed outside the CNBSS, of 1.06 (36). In CNBSS-2, a total of 622 invasive and 71 in situ breast carcinomas were ascertained in the mammography plus physical examination group, and 610 and 16 in the physical examination only group. Of these, 267 and 148, respectively, were screen detected. At 7 years there were 38 and 39 deaths from breast cancer in the respective groups for a rate ratio of 0.97 (37). At 11–16 years there were 107 and 105 deaths from breast cancer in the respective groups for a rate ratio of 1.02 (38). Information on tumor size and clinical stage at diagnosis is available in both datasets. 
Maximization of the likelihood given by [10.6]–[10.8] is a challenging problem because it involves many time-consuming computations. This statement is especially true for the many (tens of thousands of) double integrals [10.8] representing the contributions of censored observations, for censoring is heavy in this kind of study. Therefore, we resorted to simulations to estimate the contributions of censored data rather than evaluate these integrals numerically. The simulation model described in the next section was used for this purpose. The survival function F̄U (for censored observations) and the p.d.f. pU (for missing tumor size) were estimated nonparametrically from the simulated data. There is always a certain level of random noise in the simulated likelihood, calling for stochastic approximation methods to find a maximum of its expected value. Therefore, we used the Kiefer–Wolfowitz procedure (39) to obtain maximum likelihood estimates. Unfortunately, when applied to the log-likelihood function, the Kiefer–Wolfowitz procedure may result in biased estimates, and one needs to generate extremely large simulation samples to keep this bias to a minimum. For this reason, we generated 10⁵ simulated samples when estimating F̄U(ui) and 5 × 10⁵ samples when estimating pU(ui) at each iteration of the Kiefer–Wolfowitz procedure in the likelihood inference from the Canadian screening trials. In a separate set of simulation experiments, we assured ourselves that this sample size was sufficient for obtaining stable results. The contributions of exact observations were computed numerically in accordance with formulas [10.6] or [10.7]. In our analysis of the CNBSS data, we assumed that the reciprocal of Λ in formulas [10.6]–[10.8] is gamma distributed with mean μ and standard deviation σ.
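The Kiefer–Wolfowitz procedure itself is easy to illustrate on a toy problem: a noisy objective is ascended using central finite differences with shrinking step sizes. The objective and gain sequences below are hypothetical stand-ins for the simulated log-likelihood, not the actual trial analysis.

```python
import random

random.seed(3)

def noisy_objective(x):
    # Toy stand-in for a simulated log-likelihood: true maximum at x = 2
    return -(x - 2.0) ** 2 + random.gauss(0.0, 0.1)

def kiefer_wolfowitz(f, x0, n_iter=5000, a=0.5, c=0.5):
    """Ascend the expected value of a noisy function via central differences."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = a / n                # step-size sequence; its sum diverges
        c_n = c / n ** (1.0 / 3)   # finite-difference width; shrinks more slowly
        grad = (f(x + c_n) - f(x - c_n)) / (2.0 * c_n)
        x += a_n * grad
    return x

print(kiefer_wolfowitz(noisy_objective, x0=0.0))  # approaches 2
```

The classical gain conditions (Σaₙ = ∞, Σaₙcₙ < ∞, Σaₙ²/cₙ² < ∞) are satisfied by these sequences, which is what guarantees convergence in probability to the maximizer.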
Since there are three modes of breast cancer detection in the trial, we extended the likelihood function to incorporate three sensitivity parameters: α0, α1, and α2 for spontaneous (clinical) detection, mammography combined with physical exam, and physical exam only, respectively. From age at enrollment one can pick out four distinct birth cohorts in the CNBSS data. Each of the four cohorts is composed of women born within the following 5-year intervals: 1921–25, 1926–30, 1931–35, and 1936–40, respectively. If we allow the parameter ρ to vary among the different birth cohorts, there will be four parameters ρ1, ρ2, ρ3, ρ4 forming the proportional hazards structure of the onset time distribution. We made these parameters responsible for the birth cohort effect, as suggested by Boucher and Kerber (40). The remaining two parameters A and B (see formula [10.1]), along with the parameters μ, σ, α0, α1, and α2, are assumed to be common to all birth cohorts. Therefore, there are 11 parameters in total to be estimated from the CNBSS data by the method of maximum likelihood. Our procedure resulted in the following maximum likelihood estimates (MLEs): ρ̂1 = 0.059, ρ̂2 = 0.063, ρ̂3 = 0.084, ρ̂4 = 0.09. These estimates indicate that breast cancer risk tends to increase over the time range covered by the birth cohorts under study. The MLEs of the parameters A, B, μ, σ and their 95% confidence intervals are given in Table 2. The construction of approximate confidence intervals is based on asymptotic normality of maximum likelihood estimators. The MLEs of α0, α1, and α2 are equal to 7.31 × 10⁻¹⁰, 4.82 × 10⁻⁹, and 1.34 × 10⁻⁹, respectively, with the corresponding confidence intervals (6.91 × 10⁻¹⁰ to 7.72 × 10⁻¹⁰), (4.45 × 10⁻⁹ to 5.18 × 10⁻⁹), and (1.24 × 10⁻⁹ to 1.44 × 10⁻⁹).
There is a more than threefold difference between the sensitivity parameters associated with mammography combined with physical exam (α1) and with physical exam alone (α2).

Table 2. Maximum likelihood estimates of model parameters with asymptotic 95% confidence intervals

  Â = 1.112 × 10⁻⁴ (1.066 × 10⁻⁴, 1.158 × 10⁻⁴)
  B̂ = 0.1203 (0.1192, 0.1214)
  μ̂ = 0.526 (0.514, 0.537)
  σ̂ = 0.531 (0.489, 0.574)

Randomized screening trials are especially well suited for estimation of model parameters in a statistically sound way, because such studies generate individual data on age and tumor size at detection. However, it is the objective of our study to provide a means of explanatory and predictive inference at the population level. We use the CNBSS data to estimate parameters associated with the four birth cohorts identified through the Canadian studies, keeping in mind that some of these parameters may still be adjusted when an additional calibration of the model is necessary (see “Model Validation”). The CNBSS do not provide any information on women born before 1921 or after 1940, so that the birth cohort effect cannot be estimated in terms of the parameter ρ from this dataset beyond 1921–40. To surmount this difficulty, the parameters ρ for other birth cohorts were calculated by using the rate ratios that resulted from the age–cohort model (1,41–43). In doing so, we define ρ1, associated with the 1921–25 cohort, as a new baseline parameter while retaining the ratios ρ2/ρ1, ρ3/ρ1, and ρ4/ρ1 suggested by the analysis of the CNBSS data. The corresponding ratios for other birth cohorts are given by the analysis based on the age–cohort model.
All birth cohorts were grouped in 5-year intervals and the estimated rate ratios were applied to the midpoints of these intervals. The estimated values of the parameter ρ for the various birth cohorts are shown in Fig. 1.

Fig. 1. Estimated values of the parameter ρ as a function of birth cohort.

Remark 3. By no means can the age–period–cohort model replace or be superior to mechanistically motivated models, even when modeling cancer incidence in the absence of screening, because its structure is rather rigid, being completely determined by the assumption of proportionality of risks (rates). Besides, the relative risk variance tends to increase in calendar time owing to an increasing extent of truncation of the baseline rate for late cohorts to eliminate the effect of screening.

A Simulation Model

Although many characteristics of the above-described model of the natural history of breast cancer can be derived analytically, we developed its simulation counterpart with which to explore the behavior of the basic model under various theoretical scenarios. This simulation model is easier to handle when comparing modeling results with epidemiological indicators in population settings. Another advantage of the simulation approach is that the software can be more readily modified when new elements, such as sensitivity thresholds, need to be incorporated into the basic model structure. Also, the simulation model makes it easier to calculate such important characteristics as the mean lead time (and the corresponding variance) and program sensitivity. The simulation model generates individual histories of cancer development and detection for each birth cohort in accordance with the postulates formulated in “A Stochastic Model of Tumor Latency” and “Modeling the Impact of Screening on the Natural History of Breast Cancer”.
The time of tumor onset was generated according to the distribution given by formula [10.1], whereas for the preclinical stage duration W0 the Gompertz distribution given by the second formula in [10.2] was used. The reciprocal of the growth rate was generated from a two-parameter gamma distribution. The effect of screening was modeled as described in “Modeling the Impact of Screening on the Natural History of Breast Cancer” (see “Mammographic Screening” for more details), with the probabilities of detection at the kth screen specified by formula [10.5]. The information on age and tumor size was retrieved after each event of either screen-based or spontaneous detection. The probabilistic characteristics of interest were estimated nonparametrically from the simulated data. The code was written in Pascal (Delphi). Breast cancer incidence. It is important to make inferences in terms of a characteristic of the model that can be estimated in the presence of data censoring. In a cohort setting, the most natural characteristic to be modeled is the hazard rate h(x) as a function of age x at cancer detection. Under the model of independent censoring, the function h(x) can be estimated from real or simulated data so that the resultant estimate does not depend on competing mortality. Let Uj = [xj−1, xj); then the life-table type estimator of h(x) is given by

$\hat{h}(x_j) = \frac{\text{number of events in } U_j}{\text{number at risk by the start of } U_j},$ [10.9]

so that there is no need to model competing risks explicitly. This is a distinct advantage of this indicator, because invoking independent information on competing mortality would induce additional random noise in the epidemiological characteristic to be estimated. The estimator for h(x) has desirable asymptotic properties: it is consistent and efficient.
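Estimator [10.9] can be implemented in a few lines. The sketch below assumes right-censored data, with censored subjects counted in the risk set but not as events; the numbers are a made-up toy example, not trial data.

```python
def lifetable_hazard(event_times, censor_times, grid):
    """Eq. [10.9]: h-hat(x_j) = events in [x_{j-1}, x_j) / number at risk at x_{j-1}."""
    all_times = list(event_times) + list(censor_times)
    hazards = []
    for lo, hi in zip(grid[:-1], grid[1:]):
        at_risk = sum(1 for t in all_times if t >= lo)       # still under observation at lo
        events = sum(1 for t in event_times if lo <= t < hi)  # detections within the interval
        hazards.append(events / at_risk if at_risk else 0.0)
    return hazards

# Toy example: detections at ages 1, 2, 3; one subject censored at 2.5
print(lifetable_hazard([1.0, 2.0, 3.0], [2.5], [0.0, 2.0, 4.0]))  # 0.25 and 2/3
```

Note that the censored subject contributes to the denominator of both intervals but never to the numerator, which is exactly why the estimate is free of competing mortality.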
The same estimator can be used for mortality. In a population setting, the hazard rate becomes time dependent and needs to be generalized, leading to the notion of composite hazard. Let hi(x) be the hazard function for the ith cohort and t be the calendar year. The composite hazard hᶜ(x, t) is defined as

$h^C(x,t) = h_{t-x}(x).$ [10.10]

Therefore, a pertinent estimator for hᶜ(x, t) is

$\hat{h}^C(x,t) = \hat{h}_{t-x}(x).$ [10.11]

The empirical counterpart of hᶜ(x, t) is

$I(x_j,t) = \frac{\text{number of events (cases) in } U_j \text{ at time } t}{\text{number at risk by the start of } U_j \text{ at time } t}.$ [10.12]

The commonly used indicator (age-specific incidence) is calculated as

$I^*(x_j,t) = \frac{\text{number of new cases in } U_j \text{ at time } t}{\text{number alive by the start of } U_j \text{ at time } t}.$ [10.13]

In addition to the risk set, the denominator of [10.13] counts those persons in the age group Uj who have been diagnosed with cancer but are still alive in calendar year t. The estimator I* depends on the effects of data censoring (competing mortality), and there is no meaningful probabilistic characteristic for which the statistic I*(xj, t) could be an unbiased estimator. If one uses I*(xj, t) as an estimator for hᶜ(x, t), the bias remains unknown. However, formulas [10.12] and [10.13] are expected to be numerically close to each other, and for this reason we believe that for all practical purposes the estimator I(xj, t) is well approximated by I*(xj, t).
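Definition [10.10] amounts to indexing a family of cohort-specific hazards by birth year t − x. A minimal sketch, with entirely hypothetical cohort hazard functions:

```python
# Toy cohort-specific hazards h_i(x); keys are birth years (hypothetical values)
cohort_hazards = {
    1920: lambda x: 1.0e-3 + 1.0e-5 * x,
    1925: lambda x: 1.2e-3 + 1.1e-5 * x,
}

def composite_hazard(x, t):
    """Eq. [10.10]: h^C(x, t) = h_{t-x}(x), i.e., the cohort born in year t - x."""
    return cohort_hazards[t - x](x)

print(composite_hazard(50, 1970))  # age 50 in calendar year 1970 -> 1920 birth cohort
```

The same lookup, applied to the nonparametric estimates ĥᵢ, gives the plug-in estimator [10.11].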
For model calibration and validation, we use the incidence I*(xj, t) and its age-adjusted (to the 2000 U.S. standard population) counterpart as meaningful summary characteristics of the SEER data. The age-adjusted true incidence is defined as

$r(t) = \int h^C(x,t)\, \omega_0(x)\,dx,$ [10.14]

where ω0(x) is the age distribution in the standard population. When estimating r, we replace hᶜ with I*. A model of breast cancer survival. To model mortality rates, we proceed from the following regression model (13–15,17) that relates the survival function, Ḡ, of the postdiagnosis survival time to the values of clinical covariates (age, stage, tumor size) represented by a vector z:

$\bar{G}(t\,|\,\boldsymbol{\beta}, \mathbf{z}) = \exp\left[-\theta(\boldsymbol{\beta}_1, \mathbf{z})\left\{1 - \bar{F}(t)^{\eta(\boldsymbol{\beta}_2, \mathbf{z})}\right\}\right],$ [10.15]

where β = (β1, β2), β1 and β2 are vectors of regression coefficients, F̄ is an arbitrary baseline survival function, and the functions θ and η are each of the form exp(β′z). Formula [10.15] is a natural generalization of the proportional hazards (PH) model with cure (13,15); the latter is a special case of [10.15] with η = 1. A distinct advantage of this model is that each covariate may exert its effect both on long-term survival through θ(z) and on short-term survival through η(z); this explains its higher flexibility compared with that of the traditional PH model. The need for extension [10.15] of the PH model is motivated by the fact that the original PH model does not provide a good description of breast cancer survival (9,13,16,44,47). Within a semiparametric framework, the baseline function F̄ is treated as a step function (with jumps at the observed failure times) that is set at zero at the point of last observation. Efficient algorithms are available to fit the semiparametric model [10.15] to survival data (13–15).
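Formula [10.15] is straightforward to transcribe. The sketch below uses a toy exponential baseline survival function and coefficients only loosely modeled on the intercept and tumor-size entries of Table 3; everything else is hypothetical. As t → ∞ the expression tends to exp(−θ(z)), the cured fraction.

```python
import math

def cure_model_surv(t, z, beta1, beta2, baseline_surv):
    """Eq. [10.15]: G-bar(t|z) = exp(-theta(z) * (1 - Fbar(t)^eta(z)))."""
    theta = math.exp(sum(b * x for b, x in zip(beta1, z)))
    eta = math.exp(sum(b * x for b, x in zip(beta2, z)))
    return math.exp(-theta * (1.0 - baseline_surv(t) ** eta))

baseline = lambda t: math.exp(-0.2 * t)          # toy baseline survival Fbar(t)
z = [1.0, 20.0]                                  # intercept term + toy tumor size
b1, b2 = [-2.11, 3.74e-4], [0.0, 6.27e-4]        # loosely modeled on Table 3

print(cure_model_surv(0.0, z, b1, b2, baseline))   # 1.0: everyone alive at diagnosis
print(cure_model_surv(1e6, z, b1, b2, baseline))   # approx exp(-theta): the cured fraction
```

Because θ and η are separate log-linear functions of z, a covariate such as tumor size can simultaneously lower the cure fraction (through θ) and accelerate early failures (through η), which is the flexibility the text describes.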
The model [10.15] has proven to provide an excellent fit to data on breast cancer (13,16,47) and prostate cancer (17) survival. The regression coefficients incorporated into θ(z) and η(z) were estimated from the SEER data using an algorithm proposed by Tsodikov (13); their numerical values are given in Table 3. In this analysis, we used survival data on more than 165,000 patients diagnosed with breast cancer since 1988. This subset was chosen because it provides the information on tumor size at diagnosis needed for our analysis. Similar estimates of the regression coefficients were obtained when the baseline function F̄ was approximated by a two-parameter Weibull distribution.

Table 3. Regression coefficients estimated from the SEER data on breast cancer survival*

Covariate          | Coefficient for θ(z)    | Coefficient for η(z)
Baseline           | β11 = −2.11             | β21 = 0
Tumor size         | β12 = 3.74 × 10−4       | β22 = 6.27 × 10−4
Age at diagnosis   | β13 = 5.16 × 10−6       | β23 = 5.33 × 10−4
Stage, regional    | β14 = 1.30              | β24 = 0.41
Stage, distant     | β15 = 2.38              | β25 = 1.18

* SEER = Surveillance, Epidemiology, and End Results.

In the simulation counterpart of our model, we generated a random variable, M, from the conditional survival function [10.15] for each set of covariates produced by the model of cancer detection (age, tumor size, clinical stage), with parameter values estimated from the Canadian studies (after a pertinent calibration of the model), so that the lifetime of each individual equals U + M. We did not analyze the CNBSS survival data because of their scarcity.
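The simulation step that draws M from the conditional survival function [10.15] can be sketched by inverting Ḡ directly. The coefficients are those of Table 3, but the Weibull baseline parameters and the covariate coding z = [1, size, age, regional, distant] are illustrative assumptions (the paper's actual baseline is semiparametric, and the units of the size covariate are not stated in this chunk):

```python
import math

# Regression coefficients from Table 3 (order: baseline, tumor size,
# age at diagnosis, stage regional, stage distant).
B1 = [-2.11, 3.74e-4, 5.16e-6, 1.30, 2.38]   # long-term effect theta
B2 = [0.0, 6.27e-4, 5.33e-4, 0.41, 1.18]     # short-term effect eta

def F_bar(t, shape=1.5, scale=3.0):
    # Hypothetical Weibull baseline survival (parameters invented).
    return math.exp(-((t / scale) ** shape))

def inv_F_bar(f, shape=1.5, scale=3.0):
    # Inverse of the Weibull baseline: the t such that F_bar(t) = f.
    return scale * (-math.log(f)) ** (1.0 / shape)

def sample_M(z, u):
    """Invert G(t|z) = u for model [10.15]; u ~ Uniform(0, 1).
    With probability exp(-theta) the drawn time is infinite (cure)."""
    theta = math.exp(sum(b * x for b, x in zip(B1, z)))
    eta = math.exp(sum(b * x for b, x in zip(B2, z)))
    if u <= math.exp(-theta):
        return math.inf                       # long-term survivor (cured)
    # G(t) = u  =>  F_bar(t)**eta = 1 + log(u)/theta
    f = (1.0 + math.log(u) / theta) ** (1.0 / eta)
    return inv_F_bar(f)
```

Repeated draws of u give a sample of postdiagnosis survival times for one covariate set, from which the simulated lifetime U + M is formed.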
The basic probabilistic characteristics of breast cancer mortality (such as the hazard rate) were estimated nonparametrically from the sample of simulated times U + M. Mammographic screening. Although the model of breast cancer screening was described in sufficient detail earlier, a few further comments are in order here. To specify the initial value of the sensitivity parameter α1 for the base case, we proceeded from its estimate obtained from the CNBSS data on the combined mode of detection, i.e., mammography and physical exam, because in real practice the two procedures are frequently performed together. To make the model of screening more realistic, we introduced threshold values for detectable tumor volumes. The threshold volume for screening-based detection was set at 0.004 cm3, the minimum volume observed among screen-detected tumors in the CNBSS dataset. Similarly, a threshold of 0.014 cm3 for spontaneous detection was determined from the CNBSS data after eliminating the four smallest values, which were suspected of being outliers. However, the net results of modeling epidemiological descriptors are not perceptibly affected by these thresholds. Individual schedules of mammographic examinations were modeled using the dissemination model developed by the NCI. This software generates a screening schedule for each individual belonging to a given birth cohort. In addition to this sequence of screening ages, each individual history of breast cancer includes the random variables T and Λ, as well as the times W0 and W1 of spontaneous and screen-based detection, respectively. Both random variables W0 and W1 are measured from the time of tumor onset. Given T, Λ, and an individual screening schedule τ1, τ2, …, τn, the random variables W0 and W1 are generated using the second of formulas [10.2] and formula [10.5], respectively, which gives a sample value of W = min(W0, W1).
The components W0 and W1 determining the actual age at detection are only conditionally independent, given the time T of tumor onset. Therefore, these components cannot be manipulated independently to achieve a better fit to the observed data. Once the age at tumor detection U = T + W has been determined, a check is made as to whether its value exceeds the maximum allowable age in a given cohort. If it does not, the size of the detected tumor is recorded. Thus, the output of our simulations is represented by the pairs U, S. The clinical stage (local, regional, distant) is generated conditionally on this output from a distribution estimated from the SEER data, yielding the triples of quantities that are necessary to construct the most basic epidemiological indicators. MODELING EFFECTS OF TREATMENT As described earlier in this report, the effect of early detection on mortality was modeled through the regression coefficients β1 and β2 characterizing the contributions of age, tumor size, and clinical stage to long- and short-term survival effects, respectively. Maximum likelihood estimates of these coefficients were obtained from the SEER data on postdetection survival of patients with breast cancer diagnosed after 1988. This time interval is characterized by widespread use of novel modes of adjuvant therapy for breast cancer, first and foremost those associated with the advent of tamoxifen. When modeling the base case, however, one needs to cover the whole interval between 1975 and 1999. Therefore, using the coefficients β1 and β2 thus estimated would result in a significantly lower mortality than was actually observed. The SEER data do not provide the necessary information on breast cancer treatment, so the effect of tamoxifen and other advances in breast cancer treatment has to be modeled indirectly.
One way of doing this is to calibrate the model by introducing two additional time-dependent covariates zθ and zη and the corresponding scaling factors exp(cθzθ) and exp(cηzη) that modify the short- and long-term survival effects by acting multiplicatively on the functions θ(β1, z) = exp(β1′z) and η(β2, z) = exp(β2′z) in formula [10.15]. The effect of treatment on breast cancer mortality needs to be modeled as a function of calendar time, t, to reflect the dissemination of tamoxifen and other therapy improvements. To retain identifiability of the model, we assume that there is a change point t0 (calendar year) such that zθ = zη = 1 for t < t0 and zθ = zη = 0 for t ≥ t0. Thus, we introduce the simplest stepwise dependence of the effect of treatment on calendar time. This model will be referred to as Model 1. This gives us three more parameters, cθ, cη, and t0, with which to calibrate the model of breast cancer mortality. The rationale for model calibration is discussed in the next section. Using the SEER data, we obtained the following least squares estimates: cθ = 2.65, cη = −3.05, and t0 = 1980. These values provide a reasonably good fit to the observed breast cancer mortality over 1975–1999 (Fig. 4), and at the same time they serve as auxiliary quantitative characteristics of the contribution of therapy advances (including tamoxifen) to breast cancer survival.
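The stepwise covariate of Model 1 (change point t0 = 1980), and the linear variant used by Model 2 later in the text (t1 = 1960, t2 = 1990), can be sketched as follows; the function names are illustrative:

```python
import math

def z_step(t, t0=1980):
    """Model 1: stepwise treatment covariate, z = 1 before the change
    point t0 and z = 0 afterwards."""
    return 1.0 if t < t0 else 0.0

def z_linear(t, t1=1960, t2=1990):
    """Model 2: gradual dissemination of improved treatment, with z
    decreasing linearly from 1 at t1 to 0 at t2."""
    if t <= t1:
        return 1.0
    if t >= t2:
        return 0.0
    return (t2 - t) / (t2 - t1)

def survival_multiplier(t, c, z_fn):
    # Multiplicative factor exp(c * z(t)) applied to theta or eta in [10.15].
    return math.exp(c * z_fn(t))
```

With the estimate cθ = 2.65, the long-term effect θ is inflated by exp(2.65) ≈ 14 before t0 (poorer survival before the treatment improvements), while cη = −3.05 correspondingly deflates the short-term effect.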
In particular, extending the estimated values of cθ and cη to the period t ≥ t0 would yield the mortality rate that would be observed with no use of tamoxifen (no improvements in treatment), while setting cθ = cη = 0 for all t would predict the mortality rate that would have been observed had the contemporary modes of treatment been in effect since the beginning of the twentieth century. A more realistic description of the observed mortality trend can be provided by introducing a gradual advent of improved treatments (better surgical procedures, improved irradiation regimens, adjuvant chemotherapy, patient care, etc.) that began before 1975. A simple model (Model 2) is derived by assuming that both zθ(t) and zη(t) are linearly decreasing functions such that zθ(t1) = zη(t1) = 1 and zθ(t2) = zη(t2) = 0, where t1 < 1975 and t2 > 1975. As shown in Fig. 5, a nearly perfect fit to the observed breast cancer mortality is provided by this model with t1 = 1960 and t2 = 1990. MODEL VALIDATION General Principles Assessing goodness of fit for the model described above is difficult because of its complex and multivariate structure. There are no theoretically based statistical methods of goodness-of-fit testing for the bivariate distribution given by formula [10.4], whereas resampling and cross-validation techniques are computationally prohibitive for a model of such complexity. Data censoring and truncation also stand in the way as far as the CNBSS data are concerned. The CNBSS data are heterogeneous with respect to individual screening schedules, which is why nonparametric estimators of such important quantities as the distribution of tumor size at detection or its mean value may be biased in finite samples, thereby causing further complications in goodness-of-fit testing. For all these reasons we use the base case only to validate the model. In doing so, the CNBSS data serve as a training set, whereas the SEER data are treated as a control sample.
This validation design is typical of supervised learning methods. Unlike situations in discriminant analysis, where outcome variables are categorical, we have to compare two continuous functions representing parametric (model-based) and nonparametric estimates of the epidemiological indicator of interest. Statistical goodness-of-fit tests are of little utility in comparing the expected values predicted by the model with the observed values of epidemiological indicators in the base case, for the following reasons. First, in large-sample studies, goodness-of-fit tests may be overly conservative, rejecting any reasonable (no model is perfect) model. Second, even if one is prepared to assume a Poisson error structure [which is not a plausible hypothesis in the presence of screening (2)], it is still extremely difficult to make use of asymptotic results for the sampling distribution of a statistic based on residuals, because the parameters are not estimated from the same data. For example, the asymptotic sampling distribution of the chi-square statistic becomes complicated when a distribution with parameters estimated from one set of data is tested for goodness of fit to some other set (45). Therefore, we rely on graphical methods based on residuals characterizing the discrepancy between the observed population-based indicators (rates) and their values predicted by the best-fit model. When estimating model parameters from a given set of data, there is always a danger of overfitting, that is, fitting overly specific patterns that do not extend to new samples. This kind of overfitting has to do with model flexibility; it may manifest itself even if a model is identifiable and all its parameters are properly estimated [see (46) for a discussion of the difference between the explained variation and the predictive properties of a model].
The phenomenon of overfitting is also known in regression analysis as the shrinkage effect, which is why the model needs to be calibrated when tested against the control sample. Calibration should not end up with the reestimation of all parameters from the control data set; otherwise, no conclusion regarding predictive qualities can be made. In other words, a calibration procedure should be as parsimonious as possible. We also require that at least some predictions be made with no further calibration of the model. Calibration of the Model There are two principles of parsimony we tried to follow in this work. First, calibration may be applied to a given parameter if there are biological grounds to believe that this parameter may indeed vary between the two settings under comparison (e.g., variations in risk factors or in the sensitivity of screening procedures); a calibration procedure may also involve those parameters that cannot be estimated from the training set for lack of relevant data. Second, the number of parameters involved in calibration should be kept to a minimum. In our calibration procedure, the mean growth rate μ was fixed at the value of 0.526 obtained from the CNBSS data. However, we included σ in the procedure, because we expected more heterogeneity in the population-based SEER data than in the CNBSS dataset generated by controlled screening trials. The sensitivity parameters α0 and α1 may also vary between the two sets of data. To meet the second requirement, we can take advantage of some properties of the model described below. These properties have to do with the relatively low sensitivity of some epidemiological indicators to a certain subset of parameters. To calibrate the model we used the so-called incidence size distribution, defined as follows.
Let rj(t) be the age-adjusted incidence for the jth tumor size category (range), j = 1, …, k; then the incidence size distribution at time t is given by

$$\phi(j,t) = \frac{r_{j}(t)}{\sum_{i=1}^{k} r_{i}(t)}, \quad [10.16]$$

where t is calendar time. Our calibration procedure involves the following steps:

Step 1. Since the distribution φ(j, t) for t = 1975 is practically insensitive to the parameters ρi characterizing the birth cohort effect (this conjecture was corroborated by computer simulations), and the effect of screening (reflected in the parameter α1) is expected to be negligibly small in 1975, we fit the model to the observed size distribution (three size categories) by minimizing the sum of squared residuals with respect to only two parameters, α0 and σ, while setting α1 = 0.

Step 2. In 1999, we expect the size distribution to depend predominantly on α1. Therefore, we fit this distribution by the method of least squares, changing only α1 and setting the parameters α0 and σ at the values resulting from Step 1.

Step 3. We repeat Step 1 with the newly estimated α1 and then proceed to Step 2. We alternate between the first two steps until a stable solution is obtained; just two iterations are normally needed.

Clearly, this algorithm can be improved by sequentially including more time points in the objective function when alternating between the two steps. In our preliminary studies, we used only the simplest version of the algorithm. Remark 4. The model-based marginal distribution of tumor size evaluated at a given time point (say, at t = 1975) is no longer a Pareto distribution even in the absence of screening, because it involves the condition that the age at detection does not exceed a certain value. This is all the more so for the distribution φ(j, t). Therefore, the Pareto approximation should not be used in Step 1 of the above algorithm.
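The alternating least-squares calibration of Steps 1–3 can be sketched with a toy stand-in for the model-predicted size distribution. The surrogate predicted_size_dist below is purely illustrative (the real φ(j, t) is produced by simulating the full natural-history model); it only mimics the structural property the procedure exploits, namely that the 1975 distribution is driven by (α0, σ) while screening (α1) shifts mass toward small tumors after 1975:

```python
import numpy as np

def predicted_size_dist(alpha0, sigma, alpha1, year):
    """Hypothetical surrogate for the model-based incidence size
    distribution phi(j, t) over three size categories."""
    screen = alpha1 * max(0, year - 1975)   # screening effect grows after 1975
    w = np.array([1.0 + screen, sigma, alpha0])
    return w / w.sum()                      # normalize as in [10.16]

def sse(p, q):
    # sum of squared residuals between two distributions
    return float(np.sum((p - q) ** 2))

def calibrate(obs_1975, obs_1999, grid, n_iter=2):
    """Steps 1-3: alternate between fitting (alpha0, sigma) to the 1975
    size distribution (screening negligible) and fitting alpha1 to the
    1999 distribution, until the solution stabilizes."""
    alpha1 = 0.0                            # Step 1 starts with alpha1 = 0
    for _ in range(n_iter):
        alpha0, sigma = min(                # Step 1: grid search on (alpha0, sigma)
            ((a, s) for a in grid for s in grid),
            key=lambda p: sse(predicted_size_dist(p[0], p[1], alpha1, 1975),
                              obs_1975))
        alpha1 = min(grid, key=lambda a: sse(   # Step 2: fit alpha1 in 1999
            predicted_size_dist(alpha0, sigma, a, 1999), obs_1999))
    return alpha0, sigma, alpha1
```

As in the text, two iterations of the alternation normally suffice for a stable solution.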
The above calibration procedure was applied to the SEER data on invasive breast cancer (excluding all in situ tumors), with all ages included in the adjustment of the age-specific incidence to the 2000 U.S. standard population. The resultant estimate σ = 0.60 indicates that the distribution of the tumor growth rate may be slightly overdispersed. This estimate of σ is slightly larger than the maximum likelihood estimate σ̂ = 0.53 from the CNBSS data; the observed tendency is consistent with the fact that the SEER data are more heterogeneous than the CNBSS data. The sensitivity parameters α0 and α1 were estimated as 4.48 × 10−10 and 8.30 × 10−7, respectively. It is natural that the calibrated parameter α0 tends to be slightly smaller than its maximum likelihood estimate obtained from the CNBSS, because all participants in the latter study received self-examination instruction. However, the much higher value of α1 still awaits interpretation (see "Discussion"). The comparison of the size distributions resulting from this procedure with their empirical counterparts is shown in Table 4.

Table 4. Model fit to the incidence size distribution*

Tumor size (diameter, cm) | 1975 Observed (%) (SEER) | 1975 Model (%) | 1999 Observed (%) (SEER) | 1999 Model (%)
<2                        | 32.94                    | 32.81          | 59.24                    | 55.05
2–4.9                     | 51.73                    | 52.23          | 31.75                    | 32.74
≥5                        | 15.27                    | 14.96          | 9.01                     | 12.21

* SEER = Surveillance, Epidemiology, and End Results.

The initial values of ρi (initiation rates) for each cohort were obtained from our analysis of the CNBSS data on age and tumor size at detection, followed by the application of the age–cohort model.
Almost no calibration of the size-specific incidence (for tumors of known size) with respect to these parameters was necessary (Kρ = 1.02; see below for the definition). Therefore, the results of our predictions pertaining to the size-specific incidence (see "Predictive Properties") were effectively obtained using the maximum likelihood estimates of ρi for the four birth cohorts in the CNBSS data and the relative risks estimated under the age–cohort model. However, the situation is not the same for the total age-adjusted incidence, which includes counts of tumors with missing size information. To predict this epidemiological descriptor, an additional calibration of the model with respect to ρi is absolutely necessary. This process amounts to imputation of the missing data on the number of cases with unknown tumor sizes. Indeed, it is impossible for a model based on tumor size at detection to describe the contribution of cases with unknown sizes to the overall incidence, because the model requires that the total age-adjusted incidence be equal to the sum of the size-specific age-adjusted incidence curves. To keep the extent of this additional calibration to a minimum, all parameters ρi were multiplied by the same (independent of i) scaling factor, Kρ, chosen to minimize the corresponding sum of squared residuals. This scaling procedure appears to have no tangible effect on the other parameters or on the quality of our predictions, so that no further tuning of the model is warranted. Thus, the parameter Kρ plays essentially the same role as the shrinkage factor in predictive regression analysis. For the total age-adjusted incidence we report Kρ = 1.14, obtained by calibrating the model to fit the observed incidence that includes cases with unknown tumor size.
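The single scaling factor Kρ, a multiplier applied uniformly to all ρi, has the usual closed-form least-squares solution. A minimal sketch (with made-up incidence numbers):

```python
def k_rho(observed, predicted):
    """Least-squares multiplier K minimizing sum_t (obs_t - K * pred_t)^2,
    given in closed form by K = sum(obs * pred) / sum(pred^2)."""
    num = sum(o * p for o, p in zip(observed, predicted))
    den = sum(p * p for p in predicted)
    return num / den
```

A value near 1 (such as the Kρ = 1.02 reported for the size-specific incidence) indicates that little calibration was needed; the Kρ = 1.14 for the total age-adjusted incidence reflects the imputation of cases with unknown tumor size.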
The total age-adjusted incidence is not the only example where calibration with respect to ρi may be required to compensate for missing information; there may be other cases (where the results of modeling are extrapolated to another data set) calling for such a calibration. Predictive Properties Now we can validate the model by predicting (with no further calibration or tuning) certain quantitative characteristics obtained from the SEER data. In particular, we would like to predict the dynamics of the following indicators: the size-specific (three size categories) and stage-specific age-adjusted (all ages) incidence curves as functions of calendar time, the total (excluding tumors of unknown size) age-adjusted incidence of malignant breast cancer over 1975–1999, and the age-specific incidence for cases of invasive breast cancer with known tumor sizes. As is obvious from Fig. 2, the model describes the size-specific age-adjusted breast cancer incidence well at fixed values of all parameters. Shown in Fig. 3 are the stage-specific (three stages) age-adjusted (all ages) incidence and the total (excluding unstaged tumors) age-adjusted incidence of malignant breast cancer. The mechanism generating missing data on tumor size is not purely random and appears to depend on calendar time, which is why we chose to identify and eliminate such cases from the SEER data rather than attempt to model this mechanism. Fig. 4 shows sample predictions of the age-specific incidence (cases with known tumor size), which appear to be surprisingly good, given that the parameters α0 and α1 were held constant across all age groups and none of these curves was used for calibration; the model was calibrated only to the incidence size distribution at two time points. The results for other years (not shown because of space limitations) are similar.
The only notable discrepancy, observed in 1999, is apparently related to the fact that the age-adjusted incidence displays somewhat irregular behavior in the vicinity of this time point (see Fig. 2).

Fig. 2. Predicting size-specific age-adjusted (all ages) breast cancer incidence at fixed parameters of the model. Only invasive tumors of known size are included.

Fig. 3. Predicting stage-specific age-adjusted (all ages) breast cancer incidence. Model predictions (solid lines); observed incidence curves (dashed lines). Only invasive tumors of known stage are included.

Fig. 4. Predicting age-specific breast cancer incidence at fixed parameters of the model. Only invasive tumors of known size are included. The same notation as in Figs. 2 and 3 is used.

Figure 5 shows how the model fits the observed breast cancer mortality. Although it is clear that breast cancer incidence continues to increase after 1975, the mortality curve is flat for a period of 15 years. It is impossible to explain these trends by screening-by-treatment interactions, given that such an effect can show up only after a time delay. In contrast, incremental improvements in therapy (before and after 1975) provide a likely explanation. It is seen from Fig.
5 that Model 2 improves the fit dramatically in comparison with Model 1 as far as the early portion of the mortality curve is concerned. Recall that Model 1 assumes a stepwise change in treatment efficacy occurring at some time point t0, while a more gradual (linear) trend is incorporated into Model 2. As is obvious from Fig. 5, the effects of screening-by-treatment interactions begin to manifest themselves in mortality after 1990. These results clearly demonstrate that the model captures the most salient features of the processes under study.

Fig. 5. Age-adjusted (30–79 years) breast cancer mortality for the period between 1975 and 1999. SEER = Surveillance, Epidemiology, and End Results.

DISCUSSION We begin our discussion by quoting Clayton and Schifflers (1): "It is the purpose of statistical analysis to extract from research data the maximum information in as parsimonious and comprehensive manner as possible." Although absolutely valid, this statement places two conflicting requirements upon model-based statistical inference. For a model to be useful, its complexity must be adequate for the information contained in the data to be analyzed. A mathematical or simulation model whose parameters are not identifiable is of no use for data analysis, unless a proper reparameterization results in identifiable combinations of model parameters. If such combinations cannot be found, more sources of information need to be used to overcome this difficulty. Wherever possible, a theoretical proof of model identifiability should be provided. Alternatively, numerical or simulation studies are needed to show that the model is not overparameterized and is sensitive enough to parameter values to allow for estimation of its parameters from real data.
A model must be sufficiently simple to meet the above requirements. At the same time, it should be flexible enough to provide a good description of heterogeneous data sets. The approach presented here appears to satisfy both requirements, thereby representing the desired compromise between identifiability and flexibility of the proposed model. We use a fully parametric model of the natural history of cancer for making maximum likelihood inferences from randomized screening trials. Having estimated parameters of the model from such data, one can use a simulation counterpart of the same model to predict various indicators associated with breast cancer incidence and mortality in the general population under different screening scenarios. In predictive settings, where model parameters are estimated from some other dataset, calibration is necessary to mitigate the effect of overfitting. This is an important step in an attempt to extrapolate the initial parametric inference from a randomized trial to a population-based setting. The proposed model structure is well suited to calibrating the model in a parsimonious and biologically meaningful way. This goal is accomplished through designing a stepwise fitting procedure so that the parameters α0 and σ are chosen to fit the tumor size distribution observed when the dissemination of mammography is believed to be low, while the parameter α1 is estimated to fit the same distribution at the end of the observation period. The relative insensitivity of the size distribution to certain subsets of parameters helps in designing such a procedure. The calibration procedure thus designed can also be viewed as a method for estimating some parameters of the natural history from data on cancer incidence in the general population. For example, the mean growth rate μ can also be estimated from the incidence size distribution (Step 2 of the proposed algorithm), which may improve the fit shown in Table 4 for t = 1999.
Just to make our validation procedure as strict as possible, we intentionally refrained from adjusting the parameter μ. However, estimation of the parameters incorporated into the onset time distribution, such as ρ, A, and B, calls for cohort observations. Randomized trials represent the best-designed cohort studies, which is why we combine both types of parametric inference in the analysis of cancer incidence. The value of α1 estimated from the SEER data appears to be much higher than its maximum likelihood estimate obtained from the Canadian study. This discrepancy can be attributed to the fact that the CNBSS data include in situ tumors, while the calibrated parameter α1 refers to invasive breast cancer. Yet another possibility is that the NCI model of mammography dissemination may underestimate the actual intensity of screening, so that the model compensates for this bias by yielding a higher value of the sensitivity parameter α1. The latter explanation is speculative, of course. The model provides an excellent description of the observed breast cancer incidence and mortality in the U.S. population. The mortality trend is consistent with the assumption that there has been a relatively long history of incremental improvements in breast cancer treatment. There are two reasons why we refrained from using the results of meta-analysis based on the proportional hazards model to explicitly describe the effect of adjuvant chemotherapy on breast cancer mortality. First, we know that the Cox model does not provide a good description of covariate effects on breast cancer survival (9,13,16,44,47). Second, one can see that the age-adjusted mortality curve is flat for 15 years (beginning in 1975), whereas breast cancer incidence continues to increase. This pattern indicates that improvements in breast cancer treatment began manifesting themselves before the start of any appreciable dissemination of mammography in the U.S. population.
A similar observation was recently reported for breast cancer mortality in the United Kingdom (18). The contribution of screening to the observed decline in mortality appears to be rather weak under our model (Fig. 6). The main point here is that the actual dissemination of screening in the U.S. population is too low for a tangible survival benefit from mammography due to screening-by-treatment interactions. It is also quite low in screening trials because of a narrow range of screening ages and just a few scheduled examinations. If the survival benefit of screening in randomized trials were truly strong, it would inevitably be seen far beyond the well-known breast cancer screening controversy (48). To appreciate such a benefit, the target population must be subjected to much more intensive screening. To demonstrate this, we ran the model in a way that mimics annual screening of all women older than 30 over 1975–1999 (special run). In this run, the percent decline in mortality due to screening is expected to be more than 19.8% by 1999 (Fig. 6). Thus, the model indicates a significant benefit of breast cancer screening, provided its dissemination is sufficiently intensive. The estimated mean lead time is 2.06 years (all ages) and does not change much in the special run.

Fig. 6. Predicting breast cancer mortality (Model 2) in the absence or presence of screening.

However nice a final fit may be, it is not enough for model validation. One needs to evaluate the predictive properties of the model under study in a situation where no further calibration is allowed. In some predictive settings (e.g., where missing information is included in the indicator to be predicted; see "Calibration of the Model"), there is no way to obviate the need for additional calibration, although such situations should be avoided whenever possible.
If such a calibration appears to be unavoidable, it should involve as few parameters as possible. We failed to explain the observed increase of breast cancer incidence by mammography dissemination alone. At no reasonable parameter values does the model fit the data after removing the birth cohort effect. This shows that the model is realistic enough to reject unrealistic scenarios.

APPENDIX: BASIC NOTATION

T - age of an individual at tumor onset;
A and B - parameters incorporated into the distribution of T;
ρ - ratio of the initiation rate to the rate of proliferation of initiated cells;
ρi - parameter ρ for the ith birth cohort;
W - time interval between T and the age at tumor detection;
U - age at tumor detection: U = T + W;
S - tumor size at detection;
λ - rate of exponential tumor growth;
Λ - random rate of tumor growth;
μ - expected value of 1/Λ;
σ - standard deviation of 1/Λ;
α0 - sensitivity parameter (proportionality coefficient in a quantal response model) for clinical detection;
α1 - sensitivity parameter (proportionality coefficient in a quantal response model) for mammography + physical exam;
α2 - sensitivity parameter (proportionality coefficient in a quantal response model) for physical examination alone;
M - postdetection survival time;
G(·) - survival time cumulative distribution function;
Ḡ(·) - survival function: Ḡ(·) = 1 − G(·);
z - vector of covariates;
β - vector of regression coefficients;
t0 - change point in calendar time;
cθ, cη - calibration coefficients.

Supported by NIH/NCI grant U01 CA88177. Some analyses reported in the paper were supported by the Utah Population Data Base and the Utah Cancer Registry, funded by contract NO1-PC-67000 from the NCI with additional support from the Utah State Department of Health and the University of Utah. We thank Dr. A. D. Tsodikov (University of California–Davis) for his help in obtaining the estimates presented in Table 3 and for valuable comments. We thank Drs. K. Cronin, E.
Feuer, and A. Mariotto, who generously shared their time, knowledge, and experience in helping us gain a better understanding of many scientific and practical issues related to this research effort. We are also grateful to the reviewers for their open-mindedness and truly helpful comments.

References

(1) Clayton D, Schifflers E. Models for temporal variation in cancer rates. II: Age-period-cohort models. Stat Med 1987; 6: 469–71.
(2) Hanin LG, Yakovlev AY. Multivariate distributions of clinical covariates at the time of cancer detection. Stat Methods Med Res 2004; 13: 457–89.
(3) Zelen M. A hypothesis for the natural time history of breast cancer. Cancer Res 1968; 28: 207–16.
(4) Feldstein M, Zelen M. Inferring the natural time history of breast cancer: implications for tumor growth rate and early detection. Breast Cancer Res Treat 1984; 4: 3–10.
(5) Blumenson LE, Bross ID. A mathematical analysis of the growth and spread of breast cancer. Biometrics 1969; 22: 95–109.
(6) Schwartz M. An analysis of the benefits of serial screening for breast cancer based upon a mathematical model of the disease. Cancer 1978; 41: 1550–64.
(7) Schwartz M. A mathematical model used to analyse breast cancer screening strategies. Oper Res 1978; 26: 937–55.
(8) Baker SG, Erwin D, Kramer BS, Prorok PC. Using observational data to estimate an upper bound on the reduction in cancer mortality due to periodic screening. BMC Med Res Methodol 2003; 3: 4. Available at: http://www.biomedcentral.com/1471-2288/3/4.
(9) Yakovlev AY, Tsodikov AD. Stochastic models of tumor latency and their biostatistical applications. Singapore: World Scientific; 1996.
(10) Asselain B, Fourquet A, Hoang T, Tsodikov AD, Yakovlev AY. A parametric regression model of tumor recurrence: an application to the analysis of clinical data on breast cancer.
Stat Probabil Lett 1996; 29: 271–8.
(11) Ibrahim JG, Chen MH, Sinha D. Bayesian survival analysis. New York (NY): Springer; 2001.
(12) Ibrahim JG, Chen MH, Sinha D. Bayesian semi-parametric models for survival data with a cure fraction. Biometrics 2001; 57: 383–8.
(13) Tsodikov A. Semiparametric models of long- and short-term survival: an application to the analysis of breast cancer survival in Utah by age and stage. Stat Med 2002; 21: 895–920.
(14) Tsodikov A. Semiparametric models: a generalized self-consistency approach. J R Stat Soc Ser B 2003; 65: 759–74.
(15) Tsodikov AD, Ibrahim JG, Yakovlev AY. Estimating cure rates from survival data: an alternative to two-component mixture models. JASA 2003; 98: 1063–78.
(16) Yakovlev AY, Tsodikov AD, Boucher K, Kerber R. The shape of the hazard function in breast carcinoma: curability of the disease revisited. Cancer 1999; 85: 1789–98.
(17) Zaider M, Zelefsky MJ, Hanin LG, Tsodikov AD, Yakovlev AY, Leibel SA. A survival model for fractionated radiotherapy with an application to prostate cancer. Phys Med Biol 2001; 46: 2745–58.
(18) Kobayashi S. What caused the decline in breast cancer mortality in the United Kingdom? Breast Cancer 2004; 11: 156–9.
(19) Albert A, Gertman PM, Louis TA, Liu SI. Screening for the early detection of cancer. 2. The impact of the screening on the natural history of the disease. Math Biosci 1978; 40: 61–109.
(20) Bartoszyński R, Edler L, Hanin L, Kopp-Schneider A, Pavlova L, Tsodikov A, Zorin A, Yakovlev A. Modeling cancer detection: tumor size as a source of information on unobservable stages of carcinogenesis. Math Biosci 2001; 171: 113–42.
(21) Moolgavkar SH, Venzon DJ. Two-event model for carcinogenesis: incidence curves for childhood and adult tumors. Math Biosci 1979; 47: 55–77.
(22) Moolgavkar SH, Knudson AG. Mutation and cancer: a model for human carcinogenesis. J Natl Cancer Inst 1981; 66: 1037–52.
(23) Moolgavkar SH, Luebeck EG. Two-event model for carcinogenesis: biological, mathematical and statistical considerations. Risk Anal 1990; 10: 323–41.
(24) Heidenreich WF. On the parameters of the clonal expansion model. Radiat Environ Biophys 1996; 35: 127–9.
(25) Hanin LG, Yakovlev AY. A nonidentifiability aspect of the two-stage model of carcinogenesis. Risk Anal 1996; 16: 711–5.
(26) Heidenreich WF, Luebeck EG, Moolgavkar SH. Some properties of the hazard function of the two-mutation clonal expansion model. Risk Anal 1997; 17: 391–9.
(27) Gregori G, Hanin L, Luebeck G, Moolgavkar S, Yakovlev A. Testing goodness of fit with stochastic models of carcinogenesis. Math Biosci 2001; 175: 13–29.
(28) Hanin L. Identification problem for stochastic models with application to carcinogenesis, cancer detection and radiation biology. Discrete Dynamics Nat Soc 2002; 7: 177–89.
(29) Zorin AV, Edler L, Hanin LG, Yakovlev AY. Estimating the natural history of breast cancer from bivariate data on age and tumor size at diagnosis. In: Edler L, Kitsos CP, editors. Quantitative methods for cancer and human health risk assessment. New York (NY): Wiley; 2005. pp. 317–27.
(30) Hanin LG, Yakovlev AY. Identifiability of the joint distribution of age and tumor size at detection in the presence of screening. Math Biosci 2004, submitted.
(31) Mandelblatt J, Saha S, Teutsch S, Hoerger T, Siu AL, Atkins D, et al. A systematic review: the cost-effectiveness of screening mammography beyond age 65. Ann Intern Med 2003; 139: 835–42.
(32) Hanin LG, Tsodikov AD, Yakovlev AY. Optimal schedules of cancer surveillance and tumor size at detection. Math Comput Model 2001; 33: 1419–30.
(33) Klein JP, Moeschberger ML. Survival analysis: techniques for censored and truncated data. Springer Series in Statistics for Biology and Health. New York (NY): Springer; 1997.
(34) Miller AB, Howe GR, Wall C. The national study of breast cancer screening. Clin Invest Med 1981; 4: 227–58.
(35) Miller AB, Baines CJ, To T, Wall C. Canadian National Breast Screening Study: 1. Breast cancer detection and death rates among women age 40–49 years. Can Med Assoc J 1992; 147: 1459–76 (published erratum in Can Med Assoc J 1993; 148: 718).
(36) Miller AB, To T, Baines CJ, Wall C. The Canadian National Breast Screening Study-1: a randomized screening trial of mammography in women age 40–49: breast cancer mortality after 11–16 years of follow-up. Ann Intern Med 2002; 137: 305–12.
(37) Miller AB, Baines CJ, To T, Wall C. Canadian National Breast Screening Study 2: breast cancer detection and death rates among women aged 50 to 59 years. Can Med Assoc J 1992; 147: 1477–88 (published erratum in Can Med Assoc J 1993; 148: 718).
(38) Miller AB, To T, Baines CJ, Wall C. Canadian National Breast Screening Study 2: 13-year results of a randomized trial in women age 50–59 years. J Natl Cancer Inst 2002; 92: 1490–9.
(39) Pflug GC. Optimization of stochastic models: the interface between simulation and optimization. Boston (MA): Kluwer Academic Publishers; 1996.
(40) Boucher KM, Kerber RA. The shape of the hazard function for cancer incidence. Math Comput Model 2001; 33: 1361–76.
(41) Clayton D, Schifflers E. Models for temporal variation in cancer rates. I: Age-period and age-cohort models. Stat Med 1987; 6: 449–67.
(42) Wun LM, Feuer EJ, Miller BA. Are increases in mammographic screening still a valid explanation for trends in breast cancer incidence in the United States? Cancer Causes Control 1995; 6: 135–44.
(43) Tarone RE, Chu KC. Age-period-cohort analyses of breast-, ovarian-, endometrial- and cervical-cancer mortality rates for Caucasian women in the USA. J Epidemiol Biostat 2000; 5: 221–31.
(44) Pocock SJ, Gore SM, Kerr GR. Long-term survival analysis: the curability of breast cancer. Stat Med 1982; 1: 93–104.
(45) Greenwood PE, Nikulin MS. A guide to chi-squared testing. New York (NY): Wiley Interscience; 1996.
(46) Verweij PJM, Van Houwelingen HC. Cross-validation in survival analysis. Stat Med 1993; 12: 2305–14.
(47) Boucher K, Asselain B, Tsodikov AD, Yakovlev AY. Semiparametric versus parametric regression analysis based on the bounded cumulative hazard model: an application to breast cancer recurrence. In: Nikulin M, Balakrishnan N, Mesbah M, Limnios N, editors. Semiparametric models and applications to reliability, survival analysis and quality of life. Birkhäuser; 2004. pp. 399–418.
(48) Olsen O, Gotzsche PC. Cochrane review on screening for breast cancer with mammography. Lancet 2001; 358: 1340–2.

© The Author 2006. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oxfordjournals.org.
# Chapter 10: The University of Rochester Model of Breast Cancer Detection and Survival

JNCI Monographs, Volume 2006, Issue 36, Oct 1, 2006. Oxford University Press. 13 pages. ISSN 1052-6773; eISSN 1745-6614; DOI 10.1093/jncimonographs/lgj010; PMID 17032896.

### Abstract

This paper presents a biologically motivated model of breast cancer development and detection allowing for arbitrary screening schedules and the effects of clinical covariates recorded at the time of diagnosis on posttreatment survival. Biologically meaningful parameters of the model are estimated by the method of maximum likelihood from the data on age and tumor size at detection that resulted from two randomized trials known as the Canadian National Breast Screening Studies. When properly calibrated, the model provides a good description of the U.S. national trends in breast cancer incidence and mortality. The model was validated by predicting some quantitative characteristics obtained from the Surveillance, Epidemiology, and End Results data. In particular, the model provides an excellent prediction of the size-specific age-adjusted incidence of invasive breast cancer as a function of calendar time for 1975–1999. Predictive properties of the model are also illustrated with an application to the dynamics of age-specific incidence and stage-specific age-adjusted incidence over 1975–1999.
The purpose of the modeling effort described herein is twofold: (a) designing explanatory and predictive tools for quantitative description of the effects of breast cancer screening for various screening strategies, including the national trends in breast cancer incidence and mortality under the base case scenario; and (b) developing methods for statistical inference on the natural history of breast cancer in terms of biologically meaningful parameters. In what follows, we use the term "prediction" to mean extrapolation of the basic epidemiological descriptors from one setting to another, including new interventions and risk factors, but not the problem of forecasting future population trends. The latter sort of model-based prediction would require sufficient knowledge of future changes in all components of the natural history of the disease, including future changes of cancer risk over time. See (1) for a discussion of this problem in regard to the age–period–cohort model. The traditional approach to mathematical or simulation modeling of cancer screening tends to describe the process of tumor development in only one dimension, namely time. A broader methodological idea is to construct a stochastic model of cancer development and detection that yields the multivariate distribution of observable variables at the time of diagnosis (2). By focusing on such multivariate observations, rather than just on the age of patients at diagnosis, this idea seeks to invoke an additional source of information (available only at the time of detection) to improve estimation of unobservable parameters of cancer latency. Indeed, the process of tumor progression manifests itself as certain changes in many characteristics of the tumor. Therefore, this process is multidimensional in nature, and modeling tumor progression as a (linear) sequence of stages represents a poor approximation to a more general multivariate model of the natural history of cancer.
The idea of multiple pathways of cancer progression was introduced in the path-breaking works by Zelen (3) and Feldstein and Zelen (4). In this paper, we base our inference on the natural history of breast cancer on two important variables, namely, the tumor size and the age of a patient at diagnosis, using mechanistic models of tumor development and detection to derive an analytic expression for the joint distribution of the said variables. In doing so, we take advantage of a mechanistic two-stage model of carcinogenesis to describe the "disease-free" stage of breast cancer development and the so-called quantal response model to relate the chance of detecting a tumor to its size; the latter mechanism applies equally to both incident and screen-detected (prevalent) cases. Some authors (5–7) have long realized the advantages of multivariate analysis in screening studies, but specific modeling and inferential techniques require a much higher level of sophistication than that of the earlier attempts at a comprehensive theory of cancer screening. The proposed model of the natural history of breast cancer has the following advantages: It is based on a minimal set of biologically plausible assumptions. It is proven to be completely identifiable. It is formulated in terms of probabilistic characteristics that can be estimated in the presence of data censoring, thereby requiring no demographic information for their evaluation. When applied to the data generated by randomized screening trials, the model allows estimation of all parameters by the method of maximum likelihood, whereas a subset of its parameters responsible for the progression (preclinical) stage of tumor development can independently be estimated from the population-based data available from the Surveillance, Epidemiology and End Results (SEER) National Cancer Institute (NCI) program.
All parameters are estimated from epidemiological data using the same model, which makes them far more reliable than estimates available in the literature, because the latter were obtained under dramatically dissimilar models and assumptions. When extrapolations are made to a different dataset, minimal calibration is needed, involving only those parameters that are likely (on biological grounds) to vary between the two sets of data. The predictive power of the model has been evaluated in several applications under strict conditions allowing no further calibration of any of its parameters already estimated (calibrated) in a different setting. The model has been built in part on the base case inputs shown in Table 1.

Table 1. Base case parameter usage*

Base case treatment dissemination: Not needed
Base case mammography dissemination: Used in provided form
Base case other-cause mortality: Not needed
Base case age-specific breast cancer incidence: Used for validation
Base case age-adjusted breast cancer incidence: Some values were used for calibration
Base case 1975 breast cancer prevalence: Not needed
Base case 1975 cause-specific survival: Not needed
Base case historical survival: Not needed
Base case 1975 breast cancer mortality: Used for calibration
Base case breast cancer APC incidence: Uses a processed version of the standard parameter (relative risks)
Base case treatment effect: Not needed
Base case SEER 9 mortality: Used for calibration

* APC = age–period–cohort; SEER = Surveillance, Epidemiology, and End Results.

The natural history model allows us to estimate the effects of screening on the age-specific cancer incidence and the distribution of major covariates at the time of diagnosis. This inference is entirely independent of the data on cancer mortality. To model the effect of screening on cancer-specific mortality, one needs to establish a quantitative relationship between clinical covariates (e.g., age, stage, tumor size) and postdetection survival of patients with breast cancer. Regression survival models are designed to estimate the survival time distribution conditional on covariate information, whereas the joint (multivariate) distribution of covariates at the time of diagnosis provides a link between the natural history of breast cancer and cancer-specific survival. The periodic screening evaluation methodology (8), although elegant, does not represent a strong alternative to flexible natural history models because of its many limitations, among which is the assumption that any effect of birth cohort is negligible. Our analysis of the Utah Population Database has shown that breast cancer risk varies substantially between birth cohorts separated by time intervals as short as 5 years. We resorted to a new class of extended hazard regression models with cure that has been extensively studied in recent years (9–15). Even the simplest model from this class provides an excellent description of the effects of clinical covariates on cancer survival (13,16,17). In combination with the natural history component, this model was used in our studies to describe the U.S. national trend in breast cancer mortality from 1975 to 1999.
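The cure-rate models cited above are not written out here; as a hedged sketch, several of the cited references (e.g., 13, 15, 47) build on the bounded cumulative hazard ("promotion time") form, in which the survival function is Ḡ(t) = exp(−θF(t)) for a proper c.d.f. F, so that a fraction exp(−θ) of patients is cured. The exponential choice of F, the log-linear covariate link, and all numeric values below are illustrative assumptions, not the paper's estimates.

```python
import math

def cure_survival(t, theta, F):
    """Bounded cumulative hazard (promotion time) cure model:
    Gbar(t) = exp(-theta * F(t)), where F is a proper c.d.f.
    Since F(t) -> 1 as t grows, Gbar(t) -> exp(-theta): the cured fraction."""
    return math.exp(-theta * F(t))

# Illustrative (assumed) choices: exponential promotion-time c.d.f. and a
# log-linear link theta = exp(beta . z) tying the cure rate to covariates z.
F = lambda t: 1.0 - math.exp(-0.3 * t)
beta, z = [0.1, 0.4], [1.0, 0.5]
theta = math.exp(sum(b * v for b, v in zip(beta, z)))
cure_fraction = math.exp(-theta)  # long-term (plateau) survival probability
```

The plateau exp(−θ) is what distinguishes this family from proper survival models: the hazard is bounded in cumulative terms, so survival does not decay to zero.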
The observed mortality trend is consistent with the assumption that there has been a relatively long history of incremental improvements in breast cancer treatment. The conjecture is corroborated by a recent analysis of breast cancer mortality in the United Kingdom (18) indicating that the mortality rate began to decrease before the start of mammographic screening. The model has proven to be adequate for the complex phenomena that have so far been explored in relation to cancer incidence and mortality. This statement, however, does not mean that our model is either perfect or universal; it may call for further modifications if future applications so require. We believe that improved models of cancer screening can be developed in the future by including more components (in addition to age and tumor size) in the vector of clinical covariates accessible to measurement at the time of diagnosis. See (2) for the general idea and associated analytical techniques.

MODELING THE NATURAL HISTORY OF BREAST CANCER

Our approach attempts to implement the following concept formulated by Albert et al. (19): more realistic models for tumor detectability can be synthesized by first modeling the behavior of tumor growth over time and superimposing a model for detection probability as a function of tumor size. This idea was set forth in the so-called quantal response model of tumor detection developed by Bartoszyński and other authors in several publications [see (20) for details and references]. Below we outline the most basic features of the class of quantal response models of cancer detection combined with mechanistically motivated models of carcinogenesis.

A Stochastic Model of Tumor Latency

The latent period of tumor development can be broken down into three stages: formation of initiated cells, promotion of initiated cells resulting in the first malignant clonogenic cell, and subsequent tumor growth and progression until the event of detection occurs.
Let T be the age of an individual at tumor onset, and W the time of spontaneous tumor detection (in the absence of screening) measured from the onset of disease. We use a two-stage stochastic model of carcinogenesis proposed by Moolgavkar et al. (21–23) to specify the probability density function, pT(t), of the random variable T. The model is given by the following survival function

$$\bar{F}_T(t) := \int_t^{\infty} p_T(u)\,du = \left[\frac{(A+B)\,e^{At}}{B + A\,e^{(A+B)t}}\right]^{\rho}, \quad t \ge 0, \qquad [10.1]$$

from which the density pT(t) can be derived. Here A, B, ρ > 0 are identifiable parameters of the model (24–26). Formula [10.1] specifies the distribution of the duration of the first two stages of carcinogenesis. The process of initiation is usually modeled as a Poisson process. Here the parameter ρ is the ratio of the initiation rate and the rate of proliferation of initiated cells, whereas A and B are parameters of the promotion time distribution. This model has proven to provide a good fit to diverse data on animal and human carcinogenesis [see (27) for goodness-of-fit testing]. Introduce a random variable S to represent tumor size (the number of cells in a tumor) at spontaneous detection. Suppose that the law of tumor growth is described by a deterministic function f: [0, ∞) → [1, ∞) with f(0) = 1, so that S = f(W). It is assumed also that the random variables T and W are absolutely continuous and stochastically independent; the function f is differentiable and f′ > 0; and the hazard rate for spontaneous tumor detection is proportional to the current tumor size with coefficient α > 0. It follows from the above assumptions that

$$p_W(w) = \alpha f(w)\, e^{-\alpha \int_0^w f(u)\,du}, \quad w \ge 0.$$

Therefore,

$$p_S(s) = \alpha s\, g'(s)\, e^{-\alpha \int_0^{g(s)} f(u)\,du}, \quad s \ge 1,$$

where g stands for the inverse function of f: g = f⁻¹.
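Formula [10.1] admits a quick numerical sanity check (with illustrative parameter values, not the paper's estimates): the survival function must equal 1 at t = 0, decrease monotonically, and vanish as t grows.

```python
import math

def surv_T(t, A, B, rho):
    """Two-stage model survival function, formula [10.1]:
    Fbar_T(t) = [(A + B) * exp(A*t) / (B + A * exp((A + B)*t))] ** rho."""
    return (((A + B) * math.exp(A * t)) / (B + A * math.exp((A + B) * t))) ** rho

# Illustrative (assumed) parameter values.
A, B, rho = 0.05, 0.10, 2.0
grid = [surv_T(t, A, B, rho) for t in range(0, 101, 10)]
```

For large t the bracket behaves like ((A + B)/A) e^{-Bt}, so the tail decays exponentially with rate Bρ, consistent with the formula's role as a proper survival function.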
For deterministic exponential tumor growth with rate λ > 0 (f(w) = e^{λw}), we have

$$p_S(s) = \frac{\alpha}{\lambda}\, e^{-\frac{\alpha}{\lambda}(s-1)}, \qquad p_W(w) = \alpha\, e^{\lambda w - \frac{\alpha}{\lambda}(e^{\lambda w} - 1)}, \quad s \ge 1, \; w \ge 0. \qquad [10.2]$$

Here tumor size at detection S follows a translated exponential distribution with parameter α/λ, whereas the distribution of age at detection measured from the disease onset is a Gompertz distribution. The random variable W has the same meaning as the preclinical stage duration within the traditional approach to modeling the natural history of cancer. Consider the random vector Y := (T + W, S), whose components are interpreted as age and tumor size at diagnosis, respectively. The probability density function of Y is given by

$$p_Y(u,s) = p_T(u - g(s))\, p_S(s), \quad u \ge g(s), \; s \ge 1.$$

This distribution is identifiable (28). Remark 1. A distribution P(x; θ), where θ is a vector of parameters, is said to be identifiable if from the equality P(x; θ1) = P(x; θ2), valid for all x, it follows that θ1 = θ2. For exponential tumor growth, we obtain

$$p_Y(u,s) = \frac{\alpha}{\lambda}\, e^{-\frac{\alpha}{\lambda}(s-1)}\, p_T\!\left(u - \frac{\ln s}{\lambda}\right), \quad u \ge 0, \; 1 \le s \le e^{\lambda u}.$$

In practice, it is not the number of tumor cells S that is observable but the volume, V, in appropriate units, and one needs to change variables using the equality S = V/γ, where γ is the volume of one tumor cell (γ ≈ 10⁻⁹ cm³). The parameter γ has only a scaling effect on the distribution of tumor volume. The most straightforward generalization of the model can be accomplished by assuming that some of its parameters are random.
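A small Monte Carlo sketch of formula [10.2], with assumed values of α and λ: sampling W from its Gompertz law by inverting the survival function exp(−(α/λ)(e^{λw} − 1)) and transforming S = e^{λW} should reproduce the translated exponential law for tumor size, whose mean is 1 + λ/α.

```python
import math
import random

random.seed(0)
alpha, lam = 0.5, 1.0  # illustrative (assumed) detection and growth parameters

def sample_W():
    """Inverse-c.d.f. draw from the Gompertz law of W:
    P(W > w) = exp(-(alpha/lam) * (exp(lam*w) - 1))."""
    u = 1.0 - random.random()  # uniform on (0, 1]
    return math.log(1.0 - (lam / alpha) * math.log(u)) / lam

# S = f(W) = exp(lam * W) is a translated exponential with rate alpha/lam,
# so E[S] = 1 + lam/alpha (here: 3.0).
n = 100_000
mean_S = sum(math.exp(lam * sample_W()) for _ in range(n)) / n
```

The inversion is exact: setting the Gompertz survival function equal to a uniform draw u and solving for w gives w = ln(1 − (λ/α) ln u)/λ, which is nonnegative because ln u ≤ 0.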
In particular, suppose that 1/λ is gamma distributed with parameters a and b. Randomness of λ is reflective of the individual variability of tumor growth rate. Then we have

$$p_Y(u,s) = \frac{\alpha b^a}{(\ln s)^{a+1}\,\Gamma(a)} \int_0^u (u-x)^a \exp\left\{-\frac{b + \alpha(s-1)}{\ln s}(u-x)\right\} p_T(x)\,dx, \qquad [10.3]$$

for u ≥ 0, s ≥ 1. Here the marginal distribution of tumor size is a Pareto distribution (20). As shown by our analysis of the Utah Population Database, the distribution [10.3] provides a good fit to cohort data on breast cancer (29).

Modeling Impact of Screening on Natural History of Breast Cancer

Let 0 < τ1 < τ2 < … < τn be a given screening schedule. It is convenient to set τ0 := 0 and τn+1 := ∞. Let W0 be the time of spontaneous detection (incident and interval cases) and W1 the time of screening-based detection, both times being measured from the disease onset. Then for the time W of combined detection we have W = min(W0, W1). It is natural to assume that the random variables W1 and W0 are conditionally independent given the onset time T. This assumption is plausible for deterministic tumor growth if we hypothesize also that W0 and T are independent and that the hazard rate for the distribution of W0, as well as the discrete hazard rate of tumor detection at a medical exam (provided the previous exams did not detect the tumor) given T, are both proportional (with different coefficients of proportionality α0 and α1, respectively) to the current tumor size.
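A brief aside before developing the screening model: the Pareto claim for the marginal tumor size can be checked directly. If X = 1/Λ has a gamma distribution with shape a and rate b, averaging the conditional tail exp(−αX(s − 1)) over X (the gamma Laplace transform) gives P(S > s) = (1 + α(s − 1)/b)^{−a}, a Pareto (Lomax) tail. The parameter values below are illustrative assumptions.

```python
import math
import random

random.seed(1)
a, b, alpha = 2.0, 1.0, 0.5   # assumed gamma shape/rate and detection sensitivity

def sample_S():
    """Draw X = 1/lambda ~ Gamma(shape a, rate b); given X, the conditional
    tail of S is exp(-alpha * X * (s - 1)), i.e., S - 1 ~ Exp(rate alpha*X)."""
    x = random.gammavariate(a, 1.0 / b)      # shape a, scale 1/b (rate b)
    return 1.0 + random.expovariate(alpha * x)

def lomax_tail(s):
    """Closed-form marginal tail via the gamma Laplace transform:
    P(S > s) = (1 + alpha*(s - 1)/b) ** (-a), a Pareto (Lomax) law."""
    return (1.0 + alpha * (s - 1.0) / b) ** (-a)

n = 100_000
draws = [sample_S() for _ in range(n)]

def empirical_tail(s):
    return sum(d > s for d in draws) / n
```

The polynomial (rather than exponential) tail is the practical consequence of randomizing the growth rate: occasional slow-growing tumors produce very large sizes at detection.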
Then we have for u ≥ 0, s ≥ 1:

$$F_Y(u,s) = \Pr(T + W \le u,\, f(W) \le s) = \int_0^u F_{W|T=t}(\min\{u - t,\, g(s)\})\, f_T(t)\,dt, \qquad [10.4]$$

where

$$F_{W|T=t}(w) = 1 - e^{-\left[\alpha_0 \Phi(w) + \alpha_1 \sum_{k=i+1}^{j} f(\tau_k - t)\right]},$$

with Φ(w) = ∫₀ʷ f(u) du, τi ≤ t < τi+1, and j the largest index such that τj ≤ t + w. It can be proven that under mild conditions the distribution of the random vector Y is identifiable (30). Unfortunately, no identifiability results are available for its randomized versions because of the prohibitive complexity of the analytic expression for this distribution, even for exponential tumor growth with random growth rate. The same is true for the joint distribution given by formula [10.3]. However, our simulation experiments have shown that identifiability of the model is preserved if the compounding procedure uses the gamma distribution for λ or its reciprocal. Remark 2. It is tempting to generalize the model by making the growth rate (or the preclinical stage duration) dependent on the age of a patient at the time of tumor onset. There have been attempts to incorporate this element into models of the natural history of cancer. However, the fact that no tangible birth cohort effect on the nonparametrically estimated distribution of tumor size at detection is seen in a cohort study (29) is consistent with stochastic independence of the tumor growth rate from the age at tumor onset. The same argument applies equally to the sensitivity parameter α1. There are indications in the literature (31) that the sensitivity of mammography increases with age. However, the tendency does not appear to be strong enough to manifest itself in the type of data we deal with in this paper. Next we need a formula for the probability of detection at a given screen. Let τi ≤ t < τi+1, 0 ≤ i ≤ n.
For 0 ≤ i ≤ n − 1 and i + 1 ≤ k ≤ n, define the probability pt(k) := Pr(W1 = τk − t | T = t) of tumor detection at the kth screen given the cancer onset at moment t, and by

$$p_t(\infty) = 1 - \sum_{k=i+1}^{n} p_t(k)$$

the corresponding conditional probability that the tumor is not detected by screening. Introduce a discrete analogue of the conditional (given T = t) hazard rate for the screening-based detection as

$$\mu_t = \sum_{k=i+1}^{n} r_t(k)\, \delta_{\tau_k - t},$$

where δx stands for the Dirac measure at x, and the sum over the empty set of indices is set, as usual, to be zero. Then the following formula holds (32):

$$p_t(k) = e^{-\sum_{j=i+1}^{k-1} r_t(j)} \left[1 - e^{-r_t(k)}\right], \quad i+1 \le k \le n.$$

Observe that this holds true for all k = 1, …, n, if we set pt(k) = rt(k) = 0 for 1 ≤ k ≤ i. Assuming also that the conditional discrete rate of screening-based detection is proportional to the current tumor size, rt(k) = α1 S(τk − t), i + 1 ≤ k ≤ n, α1 > 0, we have

$$p_t(k) = e^{-\sum_{j=i+1}^{k-1} \alpha_1 S(\tau_j - t)} \left[1 - e^{-\alpha_1 S(\tau_k - t)}\right], \quad i+1 \le k \le n. \qquad [10.5]$$

Estimation of Model Parameters

When considering a study design typical for randomized screening trials, it is possible to derive the likelihood function on the basis of the observations of tumor size and age at diagnosis. Let τ = {τ1 < τ2 < … < τn} be a sequence of screening ages. We proceed from the model assumptions introduced earlier with an arbitrary probability density function pΛ(x) for the rate, λ, of tumor growth. The model is formulated in terms of tumor size and age at detection denoted by S and U, respectively. The contributions to the likelihood function are given by formulas [10.6]–[10.8] below (30). Here F̄T is the survival function of the onset time, and τk ≤ c < τk+1, 0 ≤ k ≤ n.
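Before turning to the likelihood contributions, formula [10.5] admits a simple numerical sanity check (deterministic exponential growth, onset before the first screen so i = 0, and assumed values for α1, λ, t, and the schedule τ): the analytic per-screen probabilities must telescope with the no-detection probability to 1, and must agree with a direct simulation of independent per-screen detection attempts.

```python
import math
import random

random.seed(2)
alpha1, lam = 1e-3, 0.8          # assumed screen sensitivity and growth rate
t = 0.5                          # tumor onset time (before the first screen)
tau = [1.0, 2.0, 3.0, 4.0, 5.0]  # assumed screening schedule

S = lambda w: math.exp(lam * w)  # deterministic exponential tumor size

def p_analytic(k):
    """Formula [10.5] with i = 0: probability that the tumor is first
    detected at the kth screen (k = 1, ..., n)."""
    miss = math.exp(-alpha1 * sum(S(tau[j] - t) for j in range(k - 1)))
    return miss * (1.0 - math.exp(-alpha1 * S(tau[k - 1] - t)))

def first_detection():
    """Simulate independent detection attempts: at screen k the tumor is
    found with probability 1 - exp(-alpha1 * S(tau_k - t))."""
    for k, tk in enumerate(tau, start=1):
        if random.random() < 1.0 - math.exp(-alpha1 * S(tk - t)):
            return k
    return 0  # never detected by screening

n = 100_000
counts = [0] * (len(tau) + 1)
for _ in range(n):
    counts[first_detection()] += 1

p_never = math.exp(-alpha1 * sum(S(tk - t) for tk in tau))
```

The agreement is expected by construction: [10.5] is exactly the law of the first success in a sequence of independent Bernoulli trials whose miss probabilities are exp(−α1 S(τj − t)).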
Since age at enrollment varies among study subjects, the above formulas must be modified in the usual way in order to incorporate left random truncation (33). For interval and incident cases, the contribution of an observation (u, s) is equal to the value at (u, s) of the joint p.d.f. of age and tumor size at spontaneous (clinical) detection:

$$p_1(u,s) = \alpha_0 \int_0^{u-\tau_k} \exp\left\{-\frac{\alpha_0 x}{\ln s}(s-1)\right\} p_T(u-x)\, p_{\Lambda}\!\left(\frac{\ln s}{x}\right) \frac{dx}{x} + \alpha_0 \sum_{i=0}^{k-1} \int_{u-\tau_{i+1}}^{u-\tau_i} \exp\left\{-\left[\frac{\alpha_0 x}{\ln s}(s-1) + \alpha_1 s \sum_{j=i+1}^{k} e^{\frac{\ln s}{x}(\tau_j - u)}\right]\right\} p_T(u-x)\, p_{\Lambda}\!\left(\frac{\ln s}{x}\right) \frac{dx}{x}, \qquad [10.6]$$

where τk ≤ u < τk+1, 0 ≤ k ≤ n. For screen-detected cases, the contribution of an observation (τk, s), 1 ≤ k ≤ n, equals the value of the p.d.f.
of tumor size at detection on the kth screen:

$$p_2(\tau_k, s) = \frac{1 - e^{-\alpha_1 s}}{s} \sum_{i=0}^{k-1} \int_{\tau_k - \tau_{i+1}}^{\tau_k - \tau_i} \exp\left\{-\frac{\alpha_0 (s-1)x}{\ln s}\right\} \exp\left\{-\alpha_1 s \sum_{j=i+1}^{k-1} e^{-\frac{\ln s}{x}(\tau_k - \tau_j)}\right\} p_T(\tau_k - x)\, p_{\Lambda}\!\left(\frac{\ln s}{x}\right) \frac{dx}{x}, \quad 1 \le k \le n. \qquad [10.7]$$

For censored observations, the contribution of an observation c equals F̄U(c), where U is the age at combined tumor detection:

$$p_3(c) = \bar{F}_T(c) + \int_0^{\infty} \int_0^{c-\tau_k} \exp\left\{-\frac{\alpha_0}{\lambda}(e^{\lambda x} - 1)\right\} p_T(c-x)\, p_{\Lambda}(\lambda)\,dx\,d\lambda + \sum_{i=0}^{k-1} \int_0^{\infty} \int_{c-\tau_{i+1}}^{c-\tau_i} \exp\left\{-\left[\frac{\alpha_0}{\lambda}(e^{\lambda x} - 1) + \alpha_1 e^{-\lambda(c-x)} \sum_{j=i+1}^{k} e^{\lambda \tau_j}\right]\right\} p_T(c-x)\, p_{\Lambda}(\lambda)\,dx\,d\lambda. \qquad [10.8]$$

We estimated all parameters incorporated into the model from individual data generated by the Canadian National Breast Screening Studies (CNBSS). The CNBSS comprises two individually randomized screening trials conducted during 1980–96, and monitored to 1996, in 15 centers in Canada. Both trials were coordinated at the University of Toronto by a study team directed by one of the coauthors of this paper (Miller).
The first study (CNBSS-1) included 50 430 women aged 40–49 years on study entry and evaluated the efficacy of annual mammography, breast physical examination, and breast self-examination instruction (BSE) in reducing breast cancer mortality (34). In the mammography plus physical examination group, 62% of women received five annual screens, including two-view mammography, physical examination, and BSE. The remaining women, recruited later, received four annual screens. The usual care group were not recalled for rescreening after their first visit, when they had a breast physical examination plus BSE, but were mailed annual questionnaires. The second study (CNBSS-2) included 39 405 women age 50–59 years on study entry and evaluated the contribution of annual mammography over and above annual physical examination of the breasts plus BSE in the reduction of mortality from breast cancer (34). Women were randomized to receive annual mammography and physical examination plus BSE or annual physical examination plus BSE only for a total of five or four screens. For both trials, center coordinators conducted the randomization using allocation lists prepared by the central office. Randomization was independent of physical examination findings. The center coordinators collected surgery and pathology reports for all breast diagnostic and therapeutic procedures. CNBSS pathologists reviewed all slides. If the community and CNBSS pathologist disagreed, a panel of three to five CNBSS pathologists conducted a blind and independent review. Extensive quality-control procedures were carried out while data collection was in progress. After the screening centers closed in 1988, all women known to have breast cancer were followed up annually by the CNBSS central office until June 30, 1996. Passive follow-up of all participants through linkage with the National Cancer Registry identified new diagnoses of breast cancer in study participants to December 31, 1993. 
The central office collected pathology reports for postscreen breast cancers. For these, the community diagnosis was accepted for study purposes. Deaths that occurred before a participant's screening schedule was completed were identified by family members in response to the annual mailed questionnaire. Attending physicians, who received annual requests for information on women with breast cancer, reported deaths to June 30, 1996. Linkage with the CMDB (including deaths in Canadians resident in the United States at time of death) identified causes of death in the entire cohort to December 31, 1993. Independent reviewers, blind as to allocation, reviewed clinical records and classified the underlying cause of death. In CNBSS-1, a total of 592 invasive and 71 in situ breast cancers were diagnosed by December 31, 1993, in the mammography plus physical examination group, compared with 552 and 29, respectively, in the usual care group. Of these, 208 and 58, respectively, were screen detected. At 7 years, there were 38 breast cancer deaths in the mammography plus physical examination group and 28 in the usual care group, for a rate ratio of 1.36 (35). At the 11- to 16-year (average 13-year) follow-up, there were 105 and 108 breast cancer deaths, respectively, for a rate ratio, adjusted for mammograms performed outside the CNBSS, of 1.06 (36). In CNBSS-2, a total of 622 invasive and 71 in situ breast carcinomas were ascertained in the mammography plus physical examination group, and 610 and 16 in the physical examination only group. Of these, 267 and 148, respectively, were screen detected. At 7 years there were 38 and 39 deaths from breast cancer in the respective groups, for a rate ratio of 0.97 (37). At 11–16 years there were 107 and 105 deaths from breast cancer in the respective groups, for a rate ratio of 1.02 (38). Information on tumor size and clinical stage at diagnosis is available in both datasets.
Maximization of the likelihood given by [10.6]–[10.8] is a challenging problem because it involves many time-consuming computations. This is especially true for the many (tens of thousands of) double integrals [10.8] representing the contributions of censored observations, because censoring is heavy in this kind of study. Therefore, we resorted to simulations to estimate the contributions of censored data rather than evaluate these integrals numerically. The simulation model described in the next section was used for this purpose. The survival function F̄U (for censored observations) and the p.d.f. pU (for missing tumor size) were estimated nonparametrically from the simulated data. There is always a certain level of random noise in the simulated likelihood, calling for stochastic approximation methods to find a maximum of its expected value. Therefore, we used the Kiefer–Wolfowitz procedure (39) to obtain maximum likelihood estimates. Unfortunately, when applied to the log-likelihood function, the Kiefer–Wolfowitz procedure may result in biased estimates, and one needs to generate extremely large simulation samples to keep this bias to a minimum. For this reason, we generated 10^5 simulated samples when estimating F̄U(ui) and 5 × 10^5 samples when estimating pU(ui) for each iteration of the Kiefer–Wolfowitz procedure in the likelihood inference from the Canadian screening trials. In a separate set of simulation experiments, we assured ourselves that this sample size was sufficient for obtaining stable results. The contributions of exact observations were computed numerically in accordance with formulas [10.6] and [10.7]. In our analysis of the CNBSS data, we assumed that the reciprocal of Λ in formulas [10.6]–[10.8] is gamma distributed with mean μ and standard deviation σ.
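The Kiefer–Wolfowitz procedure referenced above can be illustrated with a minimal sketch: a finite-difference stochastic approximation that climbs a noisy objective. The gain and perturbation sequences (a/n and c·n^(−1/3)) and the toy objective below are illustrative assumptions, not the settings used for the CNBSS likelihood.

```python
import random

def kiefer_wolfowitz(noisy_f, theta0, a=1.0, c=1.0, n_iter=5000):
    """Maximize E[noisy_f(theta)] by finite-difference stochastic
    approximation (Kiefer-Wolfowitz): follow a noisy two-point gradient
    estimate with decaying step and perturbation sizes."""
    theta = theta0
    for n in range(1, n_iter + 1):
        a_n = a / n                  # step sizes: sum a_n diverges
        c_n = c / n ** (1.0 / 3.0)   # perturbation sizes: c_n -> 0
        grad = (noisy_f(theta + c_n) - noisy_f(theta - c_n)) / (2.0 * c_n)
        theta += a_n * grad
    return theta

# Toy stand-in for a simulated log-likelihood: concave in theta,
# maximized at theta = 2, observed only with additive noise.
random.seed(1)
noisy_loglik = lambda t: -(t - 2.0) ** 2 + random.gauss(0.0, 0.1)
theta_hat = kiefer_wolfowitz(noisy_loglik, theta0=0.0)
print(theta_hat)  # close to 2
```

Because each evaluation of the objective is itself a simulation, the bias noted in the text comes from the finite-difference step interacting with the noise; larger simulation samples per evaluation shrink that noise.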
Since there are three modes of breast cancer detection in the trial, we extended the likelihood function to incorporate three sensitivity parameters: α0, α1, and α2 for spontaneous (clinical) detection, mammography combined with physical exam, and physical exam only, respectively. From age at enrollment, one can identify four distinct birth cohorts in the CNBSS data, composed of women born within the following 5-year intervals: 1921–25, 1926–30, 1931–35, and 1936–40, respectively. If we allow the parameter ρ to vary among the different birth cohorts, there will be four parameters ρ1, ρ2, ρ3, ρ4 forming the proportional hazards structure of the onset time distribution. We made these parameters responsible for the birth cohort effect, as suggested by Boucher and Kerber (40). The remaining two parameters A and B (see formula [10.1]), along with the parameters μ, σ, α0, α1, and α2, are assumed to be common to all birth cohorts. Therefore, there are 11 parameters in total to be estimated from the CNBSS data by the method of maximum likelihood. Our procedure resulted in the following maximum likelihood estimates (MLEs) of ρ1, ρ2, ρ3, ρ4: $$\hat{\rho}_{1}=0.059,\ \hat{\rho}_{2}=0.063,\ \hat{\rho}_{3}=0.084,\ \hat{\rho}_{4}=0.09.$$ These estimates indicate that breast cancer risk tends to increase over the time range covered by the birth cohorts under study. The MLEs of the parameters A, B, μ, σ and their 95% confidence intervals are given in Table 2. The construction of approximate confidence intervals is based on asymptotic normality of maximum likelihood estimators. The MLEs of α0, α1, and α2 are equal to 7.31 × 10−10, 4.82 × 10−9, and 1.34 × 10−9, respectively, with the corresponding confidence intervals: (6.91 × 10−10 to 7.72 × 10−10), (4.45 × 10−9 to 5.18 × 10−9), and (1.24 × 10−9 to 1.44 × 10−9).
There is a more than threefold difference between the sensitivity parameters associated with mammography combined with physical exam (α1) and with physical exam alone (α2).

Table 2. Maximum likelihood estimates of model parameters with asymptotic 95% confidence intervals

$${\hat{A}}$$ = 1.112 × 10−4 (1.066 × 10−4, 1.158 × 10−4)
$${\hat{B}}$$ = 0.1203 (0.1192, 0.1214)
$$\hat{\mu}$$ = 0.526 (0.514, 0.537)
$$\hat{\sigma}$$ = 0.531 (0.489, 0.574)

Randomized screening trials are especially well suited for estimation of model parameters in a statistically sound way, because such studies generate individual data on age and tumor size at detection. However, it is the objective of our study to provide a means of explanatory and predictive inference at the population level. We use the CNBSS data to estimate parameters associated with the four birth cohorts identified through the Canadian studies, keeping in mind that some of these parameters may still be adjusted when an additional calibration of the model is necessary (see “Model Validation”). The CNBSS do not provide any information on women born before 1921 or after 1940, so the birth cohort effect cannot be estimated in terms of the parameter ρ from this dataset beyond 1921–40. To surmount this difficulty, the parameters ρ for other birth cohorts were calculated using the rate ratios resulting from the age–cohort model (1,41–43). In doing so, we define ρ1, associated with the 1921–25 cohort, as a new baseline parameter while retaining the ratios ρ2/ρ1, ρ3/ρ1, and ρ4/ρ1 suggested by the analysis of the CNBSS data. The corresponding ratios for other birth cohorts are given by the analysis based on the age–cohort model.
All birth cohorts were grouped in 5-year intervals and the estimated rate ratios were applied to the midpoints of these intervals. The estimated values of the parameter ρ for the various birth cohorts are shown in Fig. 1.

Fig. 1. Estimated values of the parameter ρ as a function of birth cohort.

Remark 3. By no means can the age–period–cohort model replace, or be superior to, mechanistically motivated models, even when modeling cancer incidence in the absence of screening, because its structure is rather rigid, being completely determined by the assumption of proportionality of risks (rates). Besides, the relative-risk variance tends to increase in calendar time because of the increasing extent of truncation of the baseline rate for late cohorts to eliminate the effect of screening.

A Simulation Model

Although many characteristics of the above-described model of the natural history of breast cancer can be derived analytically, we developed its simulation counterpart with which to explore the behavior of the basic model under various theoretical scenarios. This simulation model is easier to handle when comparing modeling results with epidemiological indicators in population settings. Another advantage of the simulation approach is that the software can be more readily modified when new elements, such as sensitivity thresholds, need to be incorporated into the basic model structure. Also, the simulation model makes it easier to calculate such important characteristics as the mean lead time (and the corresponding variance) and program sensitivity. The simulation model generates individual histories of cancer development and detection for each birth cohort in accordance with the postulates formulated in “A Stochastic Model of Tumor Latency” and “Modeling the Impact of Screening on the Natural History of Breast Cancer”.
The time of tumor onset was generated according to the distribution given by formula [10.1], whereas for the preclinical stage duration W0 the Gompertz distribution given by the second formula in [10.2] was used. The reciprocal of the growth rate was generated from a two-parameter gamma distribution. The effect of screening was modeled as described in “Modeling the Impact of Screening on the Natural History of Breast Cancer” (see “Mammographic Screening” for more details), with the probabilities of detection at the kth screen specified by formula [10.5]. The information on age and tumor size was retrieved after each event of either screen-based or spontaneous detection. The probabilistic characteristics of interest were estimated nonparametrically from the simulated data. The code was written in Pascal (Delphi).

Breast cancer incidence.

It is important to make inferences in terms of a characteristic of the model that can be estimated in the presence of data censoring. In a cohort setting, the most natural characteristic to be modeled is the hazard rate h(x) as a function of age x at cancer detection. Under the model of independent censoring, the function h(x) can be estimated from real or simulated data so that the resultant estimate does not depend on competing mortality. Let Uj = [xj−1, xj); then the life-table type estimator of h(x) is given by  $$\hat{h}(x_{j})=\frac{\text{number of events in }U_{j}}{\text{number at risk at the start of }U_{j}},$$ [10.9], so that there is no need to model competing risks explicitly. This is a distinct advantage of this indicator, because invoking independent information on competing mortality would induce additional random noise in the epidemiological characteristic to be estimated. The estimator for h(x) has desirable asymptotic properties: it is consistent and efficient.
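The life-table estimator [10.9] can be sketched directly; the detection ages, event indicators, and age bins below are toy values for illustration only.

```python
def life_table_hazard(times, events, edges):
    """Life-table estimator of h(x_j), formula [10.9]:
    (number of events in U_j) / (number at risk at the start of U_j)."""
    hazards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = sum(1 for t in times if t >= lo)
        d = sum(1 for t, e in zip(times, events) if e and lo <= t < hi)
        hazards.append(d / at_risk if at_risk else float("nan"))
    return hazards

# Toy data: ages at exit (years); event = 1 for detection, 0 for censoring.
times = [42.0, 47.5, 51.0, 55.0, 58.0, 61.0]
events = [1, 0, 1, 1, 0, 1]
h = life_table_hazard(times, events, edges=[40, 50, 60, 70])
# one event among 6 at risk, then 2 of 4, then 1 of 1
print(h)
```

Note that censored subjects contribute to the risk set of their interval but not to the event count, which is exactly why no explicit model of competing mortality is needed.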
The same estimator can be used for mortality. In a population setting, the hazard rate becomes time dependent and needs to be generalized, leading to the notion of a composite hazard. Let hi(x) be the hazard function for the ith cohort and t be the calendar year. The composite hazard hC(x, t) is defined as  $$h^{C}(x,t)=h_{t-x}(x).$$ [10.10] Therefore, a pertinent estimator for hC(x, t) is  $$\hat{h}^{C}(x,t)=\hat{h}_{t-x}(x).$$ [10.11] The empirical counterpart of hC(x, t) is  $$I(x_{j},t)=\frac{\text{number of events (cases) in }U_{j}\text{ at time }t}{\text{number at risk at the start of }U_{j}\text{ at time }t}.$$ [10.12] The commonly used indicator (age-specific incidence) is calculated as  $$I^{\ast}(x_{j},t)=\frac{\text{number of new cases in }U_{j}\text{ at time }t}{\text{number alive at the start of }U_{j}\text{ at time }t}.$$ [10.13] In addition to the risk set, the denominator of [10.13] counts those persons in the age group Uj who have been diagnosed with cancer but are still alive in calendar year t. The estimator I* depends on the effects of data censoring (competing mortality), and there is no meaningful probabilistic characteristic for which the statistic I*(xj, t) could be an unbiased estimator. If one uses I*(xj, t) as an estimator for hC(x, t), the bias remains unknown. However, formulas [10.12] and [10.13] are expected to be numerically close to each other, and for this reason we believe that for all practical purposes the estimator I(xj, t) is well approximated by I*(xj, t).
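The distinction between [10.12] and [10.13] lies only in the denominator; a toy cohort (hypothetical status flags, not CNBSS or SEER data) makes the difference concrete.

```python
# Each subject at the start of age interval U_j in calendar year t:
# (alive, previously_diagnosed, new_case_in_interval) -- toy values.
cohort = [
    (True, False, True),    # at risk, becomes a new case
    (True, False, False),   # at risk, no event
    (True, False, False),   # at risk, no event
    (True, True, False),    # prevalent case: alive but already diagnosed
    (False, False, False),  # dead before the interval starts
]

new_cases = sum(1 for a, d, n in cohort if n)
at_risk = sum(1 for a, d, n in cohort if a and not d)  # denominator of [10.12]
alive = sum(1 for a, d, n in cohort if a)              # denominator of [10.13]

I_hat = new_cases / at_risk       # empirical composite hazard, formula [10.12]
I_star_hat = new_cases / alive    # age-specific incidence, formula [10.13]
print(I_hat, I_star_hat)
```

With few prevalent cases in an age group the two denominators nearly coincide, which is the basis for the numerical-closeness argument in the text.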
For model calibration and validation, we use the incidence I*(xj, t) and its age-adjusted (to the 2000 U.S. standard population) counterpart as meaningful summary characteristics of the SEER data. The age-adjusted true incidence is defined as  $$r(t)=\int h^{C}(x,t)\,\omega_{0}(x)\,dx,$$ [10.14], where ω0(x) is the age distribution in the standard population. When estimating r we replace hC with I*.

A model of breast cancer survival.

To model mortality rates, we proceed from the following regression model (13–15,17) that relates the survival function, $${\bar{G}},$$ of the postdiagnosis survival time to the values of clinical covariates (age, stage, tumor size) represented by a vector z:  $${\bar{G}}(t\,|\,\boldsymbol{\beta},\mathbf{z})=\exp\left[-\theta(\boldsymbol{\beta}_{1},\mathbf{z})\left\{1-{\bar{F}}(t)^{\eta(\boldsymbol{\beta}_{2},\mathbf{z})}\right\}\right],$$ [10.15], where β = (β1, β2), β1 and β2 are vectors of regression coefficients, F̄ is an arbitrary survival function, and the functions θ and η are each of the form exp(β′z). Formula [10.15] is a natural generalization of the proportional hazards (PH) model with cure (13,15); the latter is a special case of [10.15] with η = 1. A distinct advantage of this model is that each covariate may exert its effect both on long-term survival, through θ(z), and on short-term survival, through η(z); this explains its greater flexibility compared with the traditional PH model. The need for extension [10.15] of the PH model is motivated by the fact that the original PH model does not provide a good description of breast cancer survival (9,13,16,44,47). Within a semiparametric framework, the baseline function F̄ is treated as a step function (with jumps at the observed failure times) that is set to zero at the point of last observation. Efficient algorithms are available to fit the semiparametric model [10.15] to survival data (13–15).
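Formula [10.15] is straightforward to evaluate once θ and η are in hand. The sketch below assumes a hypothetical exponential baseline F̄ and toy regression coefficients; these are illustrative values only, not the SEER estimates.

```python
import math

def g_bar(t, z, beta1, beta2, f_bar):
    """Survival model [10.15]: G_bar(t|z) = exp(-theta * (1 - F_bar(t)**eta)),
    with theta = exp(beta1'z) acting on long-term survival and
    eta = exp(beta2'z) acting on short-term survival."""
    theta = math.exp(sum(b * x for b, x in zip(beta1, z)))
    eta = math.exp(sum(b * x for b, x in zip(beta2, z)))
    return math.exp(-theta * (1.0 - f_bar(t) ** eta))

# Assumed exponential baseline and toy coefficients (illustration only):
f_bar = lambda t: math.exp(-0.1 * t)
z = [1.0, 15.0]          # intercept and a toy tumor-size covariate
beta1 = [-2.11, 0.05]    # toy theta coefficients
beta2 = [0.0, 0.01]      # toy eta coefficients

s10 = g_bar(10.0, z, beta1, beta2, f_bar)
# As t -> infinity, F_bar -> 0 and G_bar -> exp(-theta): the cured fraction.
cure = math.exp(-math.exp(sum(b * x for b, x in zip(beta1, z))))
print(s10, cure)
```

Setting η = 1 recovers the PH cure model; here a covariate can shift both the cured fraction (through θ) and the shape of the failure-time distribution (through η), which is the added flexibility the text describes.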
The model [10.15] has proven to provide an excellent fit to data on breast cancer (13,16,47) and prostate cancer (17) survival. The regression coefficients incorporated into θ(z) and η(z) were estimated from the SEER data using an algorithm proposed by Tsodikov (13); their numerical values are given in Table 3. In this analysis, we used survival data on more than 165 000 patients diagnosed with breast cancer since 1988. This subset was chosen because it provides the information on tumor size at diagnosis needed for our analysis. Similar estimates of the regression coefficients were obtained when the baseline function F̄ was approximated by a two-parameter Weibull distribution.

Table 3. Regression coefficients estimated from the SEER data on breast cancer survival*

Covariate: coefficient for θ(z); coefficient for η(z)
Baseline: β11 = −2.11; β21 = 0
Tumor size: β12 = 3.74 × 10−4; β22 = 6.27 × 10−4
Age at diagnosis: β13 = 5.16 × 10−6; β23 = 5.33 × 10−4
Stage, regional: β14 = 1.30; β24 = 0.41
Stage, distant: β15 = 2.38; β25 = 1.18

* SEER = Surveillance, Epidemiology, and End Results.

In the simulation counterpart of our model, we generated a random variable, M, from the conditional survival function [10.15] for each set of covariates produced by the model of cancer detection (age, tumor size, clinical stage), with parameter values estimated from the Canadian studies (after a pertinent calibration of the model), so that the lifetime of each individual equals U + M. We did not analyze the CNBSS survival data because of their scarcity.
The basic probabilistic characteristics of breast cancer mortality (such as the hazard rate) were estimated nonparametrically from the sample of simulated times U + M.

Mammographic screening.

Although the model of breast cancer screening was described in sufficient detail earlier, a few further comments are in order here. To specify the initial value of the sensitivity parameter α1 for the base case, we proceeded from its estimate obtained from the CNBSS data on the combined mode of detection, i.e., mammography and physical exam, because in practice the two procedures frequently come together. To make the model of screening more realistic, we introduced threshold values for detectable tumor volumes. The threshold volume for screen-based detection was set at 0.004 cm3, the minimum volume observed among screen-detected tumors in the CNBSS dataset. Similarly, a threshold of 0.014 cm3 for spontaneous detection was determined from the CNBSS data after eliminating the four smallest values, suspected to be outliers. However, the net results of modeling epidemiological descriptors are not perceptibly affected by these thresholds. Individual schedules of mammographic examinations were modeled using the dissemination model developed by the NCI. This software generates a screening schedule for each individual belonging to a given birth cohort. In addition to this sequence of screening ages, each individual history of breast cancer includes the random variables T and Λ, as well as the times W0 and W1 of spontaneous and screen-based detection, respectively. Both W0 and W1 are measured from the time of tumor onset. Given T, Λ, and an individual screening schedule τ1, τ2, …, τn, the random variables W0 and W1 are generated using the second of formulas [10.2] and formula [10.5], respectively, which gives a sample value of W = min(W0, W1).
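The structure of an individual simulated history can be sketched as follows. This is a simplified stand-in: the exponential onset time, the deterministic clinical-size threshold replacing the Gompertz-distributed W0, and all parameter values are assumptions for illustration, not formulas [10.1]–[10.5].

```python
import math
import random

def simulate_detection(tau, alpha1, mu_growth=0.526, sigma_growth=0.531,
                       onset_rate=0.02, clinical_size=15.0, rng=random):
    """One simulated history: onset age T, growth rate Lambda whose
    reciprocal is gamma with mean mu and s.d. sigma, spontaneous detection
    W0 when the tumor reaches a fixed clinical size (simplification), and
    screen detection at tau_k with probability 1 - exp(-alpha1 * s_k)."""
    T = rng.expovariate(onset_rate)  # toy stand-in for onset model [10.1]
    shape = (mu_growth / sigma_growth) ** 2
    scale = sigma_growth ** 2 / mu_growth
    lam = 1.0 / rng.gammavariate(shape, scale)  # 1/Lambda ~ gamma(mu, sigma)
    W0 = math.log(clinical_size) / lam  # time from onset to clinical size
    W1 = math.inf
    for tk in tau:                      # screening schedule (ages)
        if tk <= T:
            continue                    # screen precedes tumor onset
        s = math.exp(lam * (tk - T))    # tumor size at the kth screen
        if rng.random() < 1.0 - math.exp(-alpha1 * s):
            W1 = tk - T                 # detected at this screen
            break
    W = min(W0, W1)                     # combined detection, as in the text
    return T + W, math.exp(lam * W)     # age U and size S at detection
```

The output pairs (U, S) are exactly the quantities retained by the simulation model; repeating the call over a cohort yields the samples from which hazards and size distributions are estimated nonparametrically.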
The components W0, W1 determining the actual age at detection are only conditionally independent, given the time T of tumor onset. Therefore, these components cannot be manipulated independently to achieve a better fit to the observed data. Once the age at tumor detection U = T + W has been determined, a check is made as to whether its value exceeds the maximum allowable age in a given cohort. If it does not, the size of the detected tumor is recorded. Thus, the output of our simulations is represented by the pairs U, S. The clinical stage (local, regional, distant) is generated conditionally on this output from a distribution estimated from the SEER data, yielding the triples of quantities necessary to construct the most basic epidemiological indicators.

MODELING EFFECTS OF TREATMENT

As described earlier in this report, the effect of early detection on mortality was modeled through the regression coefficients β1 and β2 characterizing the contributions of age, tumor size, and clinical stage to short- and long-term survival. Maximum likelihood estimates of these coefficients were obtained from the SEER data on postdetection survival of patients with breast cancer diagnosed after 1988. This period is characterized by widespread use of novel modes of adjuvant therapy for breast cancer, first and foremost those associated with the advent of tamoxifen. When modeling the base case, however, one needs to cover the whole interval between 1975 and 1999. Therefore, using the coefficients β1 and β2 thus estimated would result in a significantly lower mortality than that actually observed. The SEER data do not provide the necessary information on breast cancer treatment, so the effect of tamoxifen and other advances in breast cancer treatment has to be modeled indirectly.
One way of doing this is to calibrate the model by introducing two additional time-dependent covariates zθ and zη and the corresponding scaling factors exp(cθzθ) and exp(cηzη) that modify the short- and long-term survival effects by acting multiplicatively on the functions $$\theta(\boldsymbol{\beta}_{1},\mathbf{z})=\exp(\boldsymbol{\beta}^{\prime}_{1}\mathbf{z})$$ and $$\eta(\boldsymbol{\beta}_{2},\mathbf{z})=\exp(\boldsymbol{\beta}^{\prime}_{2}\mathbf{z})$$ in formula [10.15]. The effect of treatment on breast cancer mortality needs to be modeled as a function of calendar time, t, to reflect the dissemination of tamoxifen and other therapy improvements. To retain identifiability of the model, we assume that there is a change point t0 (calendar year) such that zθ = zη = 1 for t < t0 and zθ = zη = 0 for t ≥ t0. Thus, we introduce the simplest, stepwise dependence of the treatment effect on calendar time. This model will be referred to as Model 1. It gives us three more parameters, cθ, cη, and t0, with which to calibrate the model of breast cancer mortality. The rationale for model calibration is discussed in the next section. Using the SEER data, we obtained the following least squares estimates: cθ = 2.65, cη = −3.05, and t0 = 1980. These values provide a reasonably good fit to the observed breast cancer mortality over 1975–1999 (Fig. 4), and at the same time they serve as auxiliary quantitative characteristics of the contribution of therapy advances (including tamoxifen) to breast cancer survival.
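Model 1's stepwise covariates reduce to a simple rule. The sketch below plugs in the least squares estimates quoted above (cθ = 2.65, cη = −3.05, t0 = 1980); the function itself is an illustrative reformulation, not code from the original analysis.

```python
import math

def treatment_scaling(t, c_theta, c_eta, t0):
    """Model 1: stepwise covariates z_theta = z_eta = 1 before the change
    point t0 and 0 afterward; the returned factors exp(c * z) multiply
    theta(z) and eta(z) in the survival model [10.15]."""
    z = 1.0 if t < t0 else 0.0
    return math.exp(c_theta * z), math.exp(c_eta * z)

pre = treatment_scaling(1975, 2.65, -3.05, 1980)   # (exp(2.65), exp(-3.05))
post = treatment_scaling(1990, 2.65, -3.05, 1980)  # (1.0, 1.0): no scaling
print(pre, post)
```

Before t0, θ is inflated and η deflated (worse survival, mimicking the pre-tamoxifen era); at and after t0 the factors equal one, so the SEER-estimated coefficients apply unchanged.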
In particular, extending the estimated values of cθ and cη to the period t ≥ t0 would yield the mortality rate that would have been observed with no use of tamoxifen (no improvements in treatment), whereas setting cθ = cη = 0 for all t one could predict the mortality rate that would have been observed had the contemporary modes of treatment been in effect since the beginning of the twentieth century. A more realistic description of the observed mortality trend can be provided by introducing a gradual advent of improved treatments (better surgical procedures, improved irradiation regimens, adjuvant chemotherapy, patient care, etc.) that had begun before 1975. A simple model (Model 2) is derived by assuming that both zθ(t) and zη(t) are linearly decreasing functions such that zθ(t1) = zη(t1) = 1 and zθ(t2) = zη(t2) = 0, where t1 < 1975 and t2 > 1975. As shown in Fig. 5, a nearly perfect fit to the observed breast cancer mortality is provided by this model with t1 = 1960 and t2 = 1990.

MODEL VALIDATION

General Principles

Assessing goodness of fit for the model described above is difficult because of its complex and multivariate structure. There are no theoretically based statistical methods of goodness-of-fit testing for the bivariate distribution given by formula [10.4], whereas resampling and cross-validation techniques are computationally prohibitive for a model of such complexity. Data censoring and truncation also stand in the way as far as the CNBSS data are concerned. The CNBSS data are heterogeneous with respect to individual screening schedules, which is why nonparametric estimators of such important quantities as the distribution of tumor size at detection, or its mean value, may be biased in finite samples, causing further complications in goodness-of-fit testing. For all these reasons, we use the base case only to validate the model. In doing so, the CNBSS data serve as a training set, whereas the SEER data are treated as a control sample.
This validation design is typical of supervised learning methods. Unlike situations in discriminant analysis, where outcome variables are categorical, we have to compare two continuous functions representing parametric (model-based) and nonparametric estimates of the epidemiological indicator of interest. Statistical goodness-of-fit tests are of little utility in comparing the expected values predicted by the model with the observed values of epidemiological indicators in the base case, for the following reasons. In large-sample studies, goodness-of-fit tests may be overly conservative, rejecting any reasonable (no model is perfect) model. Even if one is prepared to assume a Poisson error structure [which is not a plausible hypothesis in the presence of screening (2)], it is still extremely difficult to make use of asymptotic results for the sampling distribution of a statistic based on residuals, because the parameters are not estimated from the same data. For example, the asymptotic sampling distribution of the chi-square statistic becomes complicated when a distribution with parameters estimated from one set of data is tested for goodness of fit to some other set (45). Therefore, we rely on graphical methods based on residuals characterizing the discrepancy between the observed population-based indicators (rates) and their values predicted by the best-fit model. When estimating model parameters from a given set of data, there is always a danger of overfitting, that is, of fitting overly specific patterns that do not extend to new samples. This kind of overfitting has to do with model flexibility; it may manifest itself even if a model is identifiable and all its parameters are properly estimated [see (46) for a discussion of the difference between the explained variation and the predictive properties of a model].
The phenomenon of overfitting is also known in regression analysis as the shrinkage effect, which is why the model needs to be calibrated when tested against the control sample. Calibration should not end up with the reestimation of all parameters from the control dataset; otherwise, no conclusion regarding predictive qualities can be made. In other words, a calibration procedure should be as parsimonious as possible. We also require that at least some predictions be made with no further calibration of the model.

Calibration of Model

There are two principles of parsimony we tried to follow in this work. First, calibration may be applied to a given parameter if there are biological grounds to believe that this parameter may indeed vary between the two settings under comparison (e.g., variations in risk factors or in the sensitivity of screening procedures); a calibration procedure may also involve those parameters that cannot be estimated from the training set for lack of relevant data. Second, the number of parameters involved in calibration should be kept to a minimum. In our calibration procedure, the mean growth rate μ was fixed at the value of 0.526 obtained from the CNBSS data. However, we included σ in the procedure, because we expected more heterogeneity in the population-based SEER data than in the CNBSS dataset generated by controlled screening trials. The sensitivity parameters α0 and α1 may also vary between the two sets of data. To meet the second requirement, we can take advantage of some properties of the model described below, which have to do with the relatively low sensitivity of some epidemiological indicators to a certain subset of parameters. To calibrate the model we used the so-called incidence size distribution, defined as follows.
Let rj(t) be the age-adjusted incidence for the jth tumor size category (range), j = 1, …, k; then the incidence size distribution at time t is given by  $$\phi(j,t)=\frac{r_{j}(t)}{\sum_{i=1}^{k}r_{i}(t)},$$ [10.16], where t is calendar time. Our calibration procedure involves the following steps:

Step 1. Since the distribution φ(j, t) for t = 1975 is practically insensitive to the parameters ρi characterizing the birth cohort effect (a conjecture corroborated by computer simulations), and the effect of screening (reflected in the parameter α1) is expected to be negligibly small in 1975, we fit the model to the observed size distribution (three size categories) by minimizing the sum of squared residuals with respect to only two parameters, α0 and σ, while setting α1 = 0.

Step 2. In 1999, we expect the size distribution to depend predominantly on α1. Therefore, we fit this distribution by the method of least squares, changing only α1 and setting the parameters α0 and σ at the values resulting from Step 1.

Step 3. We repeat Step 1 with the newly estimated α1 and then proceed to Step 2. We alternate between the first two steps until a stable solution is obtained; just two iterations are normally needed.

Clearly, this algorithm can be improved by sequentially including more time points in the objective function when alternating between the two steps. In our preliminary studies, we used only the simplest version of the algorithm.

Remark 4. The model-based marginal distribution of tumor size evaluated at a given time point (say, t = 1975) is no longer a Pareto distribution even in the absence of screening, because it involves the condition that the age at detection does not exceed a certain value. This is all the more so for the distribution φ(j, t). Therefore, we do not recommend using the Pareto approximation in Step 1 of the above algorithm.
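The alternating least squares scheme of Steps 1–3 can be sketched with a toy stand-in for the model-based size distribution. Everything here is hypothetical: `toy_phi`, the parameter grids, and the "true" values used to generate the synthetic observed distributions.

```python
def sse(p, q):
    """Sum of squared residuals between two size distributions."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def toy_phi(alpha0, sigma, alpha1, year):
    """Hypothetical stand-in for the model-based incidence size
    distribution phi(j, t); screening (alpha1) acts only after 1980."""
    eff = alpha1 if year >= 1980 else 0.0
    small = 0.3 + 0.5 * eff + 0.1 * (sigma - 0.5)
    large = 0.15 - 0.05 * eff - 0.1 * (alpha0 - 0.4)
    return [small, 1.0 - small - large, large]

def calibrate(phi, obs75, obs99, grid_a0, grid_sigma, grid_a1, rounds=2):
    """Step 1: fit (alpha0, sigma) to the 1975 distribution with alpha1
    fixed; Step 2: fit alpha1 to the 1999 distribution with (alpha0,
    sigma) fixed; Step 3: alternate the two steps until stable."""
    a0, sg, a1 = grid_a0[0], grid_sigma[0], 0.0
    for _ in range(rounds):
        a0, sg = min(((x, y) for x in grid_a0 for y in grid_sigma),
                     key=lambda p: sse(phi(p[0], p[1], a1, 1975), obs75))
        a1 = min(grid_a1, key=lambda x: sse(phi(a0, sg, x, 1999), obs99))
    return a0, sg, a1

# Synthetic "observed" distributions generated from known toy parameters:
obs75 = toy_phi(0.45, 0.6, 0.5, 1975)  # alpha1 has no effect before 1980
obs99 = toy_phi(0.45, 0.6, 0.5, 1999)
fit = calibrate(toy_phi, obs75, obs99,
                grid_a0=[0.35, 0.45, 0.55],
                grid_sigma=[0.5, 0.6, 0.7],
                grid_a1=[0.0, 0.25, 0.5, 0.75])
print(fit)  # recovers the generating values (0.45, 0.6, 0.5)
```

A crude grid search stands in for the least squares minimizer; the point of the sketch is the alternation itself, which converges here in the two rounds the text reports as typical.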
The above calibration procedure was applied to the SEER data on invasive breast cancer (excluding all in situ tumors), with all ages included in the adjustment of the age-specific incidence to the 2000 U.S. standard population. The resultant estimate σ = 0.60 indicates that the distribution of the tumor growth rate may be slightly overdispersed. This estimate is slightly larger than the maximum likelihood estimate $\hat{\sigma} = 0.53$ from the CNBSS data; the observed tendency is consistent with the fact that the SEER data are more heterogeneous than the CNBSS data. The sensitivity parameters α0 and α1 were estimated as $4.48 \times 10^{-10}$ and $8.30 \times 10^{-7}$, respectively. It is natural that the calibrated parameter α0 tends to be slightly smaller than its maximum likelihood estimate obtained from the CNBSS, because all participants in the latter study received self-examination instruction. However, the much higher value of α1 still awaits interpretation (see "Discussion"). The comparison of the size distributions resulting from this procedure with their empirical counterparts is shown in Table 4.

Table 4. Model fit to the incidence size distribution*

| Tumor size (diameter, cm) | 1975 Observed (%) (SEER) | 1975 Model (%) | 1999 Observed (%) (SEER) | 1999 Model (%) |
|---|---|---|---|---|
| <2 | 32.94 | 32.81 | 59.24 | 55.05 |
| 2–4.9 | 51.73 | 52.23 | 31.75 | 32.74 |
| ≥5 | 15.27 | 14.96 | 9.01 | 12.21 |

* SEER = Surveillance, Epidemiology, and End Results.

The initial values of ρi (initiation rates) for each cohort were obtained from our analysis of the CNBSS data on age and tumor size at detection, followed by application of the age–cohort model.
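The quality of the fit in Table 4 can be summarized by the residuals between the observed and model percentages. The sum-of-squares summary below is our own illustration, computed directly from the table, not a statistic reported in the text.

```python
# Observed vs. model size distributions from Table 4 (percent scale).
observed_1975 = [32.94, 51.73, 15.27]
model_1975 = [32.81, 52.23, 14.96]
observed_1999 = [59.24, 31.75, 9.01]
model_1999 = [55.05, 32.74, 12.21]

def sum_sq_resid(obs, mod):
    """Sum of squared residuals between two vectors of percentages."""
    return sum((o - m) ** 2 for o, m in zip(obs, mod))

ssr75 = sum_sq_resid(observed_1975, model_1975)
ssr99 = sum_sq_resid(observed_1999, model_1999)
```

The 1999 fit is visibly looser than the 1975 fit (ssr99 is roughly 28.8 versus 0.36 on the percent scale), which is consistent with the possibility that also estimating μ from the incidence size distribution might improve the fit for t = 1999.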
Almost no calibration of the size-specific incidence (for tumors of known size) with respect to these parameters was necessary (Kρ = 1.02; see below for the definition). Therefore, the results of our predictions pertaining to the size-specific incidence (see "Predictive Properties") were effectively obtained using the maximum likelihood estimates of ρi for the four birth cohorts in the CNBSS data and the relative risks estimated under the age–cohort model. However, the situation is different for the total age-adjusted incidence, which includes counts of tumors with missing size information. To predict this epidemiological descriptor, an additional calibration of the model with respect to ρi is necessary. This amounts to imputation of the missing data on the number of cases with unknown tumor sizes. Indeed, a model based on tumor size at detection cannot describe the contribution of cases with unknown sizes to the overall incidence, because the model requires that the total age-adjusted incidence equal the sum of the size-specific age-adjusted incidence curves. To keep the extent of this additional calibration to a minimum, all parameters ρi were multiplied by the same (independent of i) scaling factor, Kρ, chosen to minimize the corresponding sum of squared residuals. This scaling procedure appears to have no tangible effect on the other parameters or on the quality of our predictions, so no further tuning of the model is warranted. Thus, the parameter Kρ plays essentially the same role as the shrinkage factor in predictive regression analysis. For the total age-adjusted incidence we report Kρ = 1.14, obtained by calibrating the model to fit the observed incidence including cases of unknown tumor size.
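Choosing a single scaling factor to minimize the sum of squared residuals is a one-dimensional least-squares problem with a closed-form solution. The sketch below is a generic illustration of that minimizer; the incidence numbers are invented for the example and are not the SEER series.

```python
import numpy as np

def shrinkage_factor(model_curve, observed_curve):
    """Least-squares scaling K minimizing sum_i (K * m_i - o_i)^2.

    Setting the derivative with respect to K to zero gives the closed
    form K = <m, o> / <m, m>; this plays the role of the single factor
    K_rho applied to all initiation rates rho_i at once.
    """
    m = np.asarray(model_curve, dtype=float)
    o = np.asarray(observed_curve, dtype=float)
    return float(m @ o / (m @ m))

# Invented numbers for illustration (not the SEER series): a model curve
# that under-predicts the observed totals by a uniform 14%.
model_inc = np.array([100.0, 105.0, 112.0, 118.0])
observed = 1.14 * model_inc
K_rho = shrinkage_factor(model_inc, observed)
```

Because the observed curve here is an exact 14% inflation of the model curve, the closed form recovers K = 1.14, the same value the text reports for the total age-adjusted incidence.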
The total age-adjusted incidence is not the only case in which calibration with respect to ρi may be required to compensate for missing information; there may be other settings (where the results of modeling are extrapolated to another data set) calling for such a calibration.

Predictive Properties

Now we can validate the model by predicting (with no further calibration or tuning) certain quantitative characteristics obtained from the SEER data. In particular, we would like to predict the dynamics of the following indicators:

1. Size-specific (three size categories) and stage-specific age-adjusted (all ages) incidence curves as functions of calendar time, as well as the total (excluding tumors of unknown size) age-adjusted incidence of malignant breast cancer over 1975–1999.

2. Age-specific incidence for cases of invasive breast cancer with known tumor sizes.

As is evident from Figure 2, the model describes the size-specific age-adjusted breast cancer incidence well at fixed values of all parameters. Shown in Fig. 3 are the stage-specific (three stages) age-adjusted (all ages) incidence and the total (excluding unstaged tumors) age-adjusted incidence of malignant breast cancer. The mechanism generating missing data on tumor size is not purely random and appears to depend on calendar time, which is why we chose to identify and eliminate such cases from the SEER data rather than attempt to model this mechanism. Fig. 4 shows sample predictions of the age-specific incidence (cases with known tumor size), which appear to be surprisingly good, given that the parameters α0 and α1 were held constant across all age groups and none of these curves was used for calibration; the model was calibrated only to the incidence size distribution at two time points. The results for other years (not shown because of space limitations) are similar.
The only notable discrepancy, observed in 1999, appears to be related to the fact that the age-adjusted incidence displays somewhat irregular behavior in the vicinity of this time point (see Fig. 2).

Fig. 2. Predicting size-specific age-adjusted (all ages) breast cancer incidence at fixed parameters of the model. Only invasive tumors of known size are included.

Fig. 3. Predicting stage-specific age-adjusted (all ages) breast cancer incidence. Model predictions (solid lines); observed incidence curves (dashed lines). Only invasive tumors of known stage are included.

Fig. 4. Predicting age-specific breast cancer incidence at fixed parameters of the model. Only invasive tumors of known size are included. The same notation as in Figs. 2 and 3 is used.

Figure 5 shows how the model fits the observed breast cancer mortality. Although breast cancer incidence clearly continues to increase after 1975, the mortality curve is flat for a period of 15 years. These trends cannot be explained by screening-by-treatment interactions, because such an effect can show up only after a time delay. In contrast, incremental improvements in therapy (before and after 1975) provide a likely explanation. It is seen from Fig.
5 that Model 2 improves the fit dramatically in comparison with Model 1 as far as the early portion of the mortality curve is concerned. Recall that Model 1 assumes a stepwise change in treatment efficacy occurring at some time point t0, while a more gradual (linear) trend is incorporated into Model 2. As is evident from Fig. 5, the effects of screening-by-treatment interactions begin to manifest themselves in mortality after 1990. These results clearly demonstrate that the model captures the most salient features of the processes under study.

Fig. 5. Age-adjusted (30–79 years) breast cancer mortality for the period between 1975 and 1999. SEER = Surveillance, Epidemiology, and End Results.

DISCUSSION

We begin our discussion by quoting Clayton and Schifflers (1): "It is the purpose of statistical analysis to extract from research data the maximum information in as parsimonious and comprehensive manner as possible." Although entirely valid, this statement places two conflicting requirements on model-based statistical inference. For a model to be useful, its complexity must be adequate to the information contained in the data to be analyzed. A mathematical or simulation model whose parameters are not identifiable is of no use for data analysis, unless a proper reparameterization results in identifiable combinations of model parameters. If such combinations cannot be found, more sources of information need to be used to overcome this difficulty. Wherever possible, a theoretical proof of model identifiability should be provided. Alternatively, numerical or simulation studies are needed to show that the model is not overparameterized and is sensitive enough to parameter values to allow for estimation of its parameters from real data.
A model must be sufficiently simple to meet the above requirements. At the same time, it should be flexible enough to provide a good description of heterogeneous data sets. The approach presented here appears to satisfy both requirements, thereby representing the desired compromise between identifiability and flexibility of the proposed model. We use a fully parametric model of the natural history of cancer for making maximum likelihood inferences from randomized screening trials. Having estimated parameters of the model from such data, one can use a simulation counterpart of the same model to predict various indicators associated with breast cancer incidence and mortality in the general population under different screening scenarios. In predictive settings, where model parameters are estimated from some other dataset, calibration is necessary to mitigate the effect of overfitting. This is an important step in an attempt to extrapolate the initial parametric inference from a randomized trial to a population-based setting. The proposed model structure is well suited to calibrate the model in a parsimonious and biologically meaningful way. This goal is accomplished through designing a stepwise fitting procedure so that the parameters α0 and σ are chosen to fit the tumor size distribution observed when the dissemination of mammography is believed to be low, while the parameter α1 is estimated to fit the same distribution at the end of the observation period. Relative insensitivity of the size distribution to certain subsets of parameters helps design such a procedure. The calibration procedure thus designed can also be viewed as a method for estimating some parameters of the natural history from data on cancer incidence in the general population. For example, the mean growth rate μ can also be estimated from the incidence size distribution (Step 2 of the proposed algorithm), which may improve the fit shown in Table 4 for t = 1999. 
To make our validation procedure as strict as possible, we intentionally refrained from adjusting the parameter μ. However, estimation of the parameters incorporated into the onset time distribution, such as ρ, A, and B, calls for cohort observations. Randomized trials represent the best-designed cohort studies, which is why we combine both types of parametric inference in the analysis of cancer incidence. The value of α1 estimated from the SEER data appears to be much higher than its maximum likelihood estimate obtained from the Canadian study. This discrepancy may be attributable to the fact that the CNBSS data include in situ tumors, while the calibrated parameter α1 refers to invasive breast cancer. Yet another possibility is that the NCI model of mammography dissemination underestimates the actual intensity of screening, so that the model compensates for this bias by yielding a higher value of the sensitivity parameter α1. The latter explanation is, of course, speculative. The model provides an excellent description of the observed breast cancer incidence and mortality in the U.S. population. The mortality trend is consistent with the assumption that there has been a relatively long history of incremental improvements in breast cancer treatment. There are two reasons why we refrained from using the results of meta-analyses based on the proportional hazards model to describe explicitly the effect of adjuvant chemotherapy on breast cancer mortality. First, the Cox model does not provide a good description of covariate effects on breast cancer survival (9,13,16,44,47). Second, the age-adjusted mortality curve is flat for 15 years (beginning in 1975), whereas breast cancer incidence continues to increase. This pattern indicates that improvements in breast cancer treatment began to manifest themselves before the start of any appreciable dissemination of mammography in the U.S. population.
A similar observation was recently reported for breast cancer mortality in the United Kingdom (18). The contribution of screening to the observed decline in mortality appears to be rather weak under our model (Fig. 6). The main point here is that the actual dissemination of screening in the U.S. population is too low for a tangible survival benefit from mammography due to screening-by-treatment interactions. It is also quite low in screening trials, because of the narrow range of screening ages and the small number of scheduled examinations. If the survival benefit of screening in randomized trials were truly strong, it would inevitably be seen, the well-known breast cancer screening controversy (48) notwithstanding. To realize such a benefit, the target population must be subjected to much more intensive screening. To demonstrate this, we ran the model in a way that mimics annual screening of all women older than 30 over 1975–1999 (special run). In this run, the percent decline in mortality due to screening is expected to exceed 19.8% by 1999 (Fig. 6). Thus, the model indicates a significant benefit of breast cancer screening, provided its dissemination is sufficiently intensive. The estimated mean lead time is 2.06 years (all ages) and does not change much in the special run.

Fig. 6. Predicting breast cancer mortality (Model 2) in the absence or presence of screening.

However nice a final fit may be, it is not enough for model validation. One needs to evaluate the predictive properties of the model under study in a situation where no further calibration is allowed. In some predictive settings (e.g., where missing information is included in the indicator to be predicted; see "Calibration of the Model") there is no way to obviate the need for an additional calibration, although such situations should be avoided whenever possible.
If such a calibration appears to be unavoidable, it should involve as few parameters as possible. We were unable to explain the observed increase in breast cancer incidence by mammography dissemination alone: at no reasonable parameter values does the model fit the data after the birth cohort effect is removed. This shows that the model is realistic enough to reject unrealistic scenarios.

APPENDIX: BASIC NOTATION

T - age of an individual at tumor onset;
A and B - parameters incorporated into the distribution of T;
ρ - ratio of the initiation rate and the rate of proliferation of initiated cells;
ρi - parameter ρ for the ith birth cohort;
W - time interval between T and the age at tumor detection;
U - U = T + W;
S - tumor size at detection;
λ - rate of exponential tumor growth;
Λ - random rate of tumor growth;
μ - expected value of 1/Λ;
σ - standard deviation of 1/Λ;
α0 - sensitivity parameter (proportionality coefficient in a quantal response model) for clinical detection;
α1 - sensitivity parameter (proportionality coefficient in a quantal response model) for mammography + physical examination;
α2 - sensitivity parameter (proportionality coefficient in a quantal response model) for physical examination alone;
M - postdetection survival time;
G(·) - survival time cumulative distribution function;
Ḡ(·) - survival function: Ḡ(·) = 1 − G(·);
z - vector of covariates;
β - vector of regression coefficients;
t0 - change point in calendar time;
cθ, cη - calibration coefficients.

Supported by NIH/NCI grant U01 CA88177. Some analyses reported in the paper were supported by the Utah Population Data Base and the Utah Cancer Registry, funded by contract NO1-PC-67000 from the NCI, with additional support from the Utah State Department of Health and the University of Utah. We thank Dr. A. D. Tsodikov (University of California–Davis) for his help in obtaining the estimates presented in Table 3 and for valuable comments. We thank Drs. K. Cronin, E.
Feuer, and A. Mariotto, who generously shared their time, knowledge, and experience in helping us gain a better understanding of many scientific and practical issues related to this research effort. We are also grateful to the reviewers for their open-mindedness and truly helpful comments.

References

(1) Clayton D, Schifflers E. Models for temporal variation in cancer rates. II: Age-period-cohort models. Stat Med 1987; 6: 469–81.
(2) Hanin LG, Yakovlev AY. Multivariate distributions of clinical covariates at the time of cancer detection. Stat Methods Med Res 2004; 13: 457–89.
(3) Zelen M. A hypothesis for the natural time history of breast cancer. Cancer Res 1968; 28: 207–16.
(4) Feldstein M, Zelen M. Inferring the natural time history of breast cancer: implications for tumor growth rate and early detection. Breast Cancer Res Treat 1984; 4: 3–10.
(5) Blumenson LE, Bross ID. A mathematical analysis of the growth and spread of breast cancer. Biometrics 1969; 22: 95–109.
(6) Schwartz M. An analysis of the benefits of serial screening for breast cancer based upon a mathematical model of the disease. Cancer 1978; 41: 1550–64.
(7) Schwartz M. A mathematical model used to analyse breast cancer screening strategies. Oper Res 1978; 26: 937–55.
(8) Baker SG, Erwin D, Kramer BS, Prorok PC. Using observational data to estimate an upper bound on the reduction in cancer mortality due to periodic screening. BMC Med Res Methodol 2003; 3: 4. Available at: http://www.biomedcentral.com/1471-2288/3/4.
(9) Yakovlev AY, Tsodikov AD. Stochastic models of tumor latency and their biostatistical applications. Singapore: World Scientific; 1996.
(10) Asselain B, Fourquet A, Hoang T, Tsodikov AD, Yakovlev AY. A parametric regression model of tumor recurrence: an application to the analysis of clinical data on breast cancer.
Stat Probabil Lett 1996; 29: 271–8.
(11) Ibrahim JG, Chen MH, Sinha D. Bayesian survival analysis. New York (NY): Springer; 2001.
(12) Ibrahim JG, Chen MH, Sinha D. Bayesian semi-parametric models for survival data with a cure fraction. Biometrics 2001; 57: 383–8.
(13) Tsodikov A. Semiparametric models of long- and short-term survival: an application to the analysis of breast cancer survival in Utah by age and stage. Stat Med 2002; 21: 895–920.
(14) Tsodikov A. Semiparametric models: a generalized self-consistency approach. J R Stat Soc Ser B 2003; 65: 759–74.
(15) Tsodikov AD, Ibrahim JG, Yakovlev AY. Estimating cure rates from survival data: an alternative to two-component mixture models. J Am Stat Assoc 2003; 98: 1063–78.
(16) Yakovlev AY, Tsodikov AD, Boucher K, Kerber R. The shape of the hazard function in breast carcinoma: curability of the disease revisited. Cancer 1999; 85: 1789–98.
(17) Zaider M, Zelefsky MJ, Hanin LG, Tsodikov AD, Yakovlev AY, Leibel SA. A survival model for fractionated radiotherapy with an application to prostate cancer. Phys Med Biol 2001; 46: 2745–58.
(18) Kobayashi S. What caused the decline in breast cancer mortality in the United Kingdom? Breast Cancer 2004; 11: 156–9.
(19) Albert A, Gertman PM, Louis TA, Liu SI. Screening for the early detection of cancer. 2. The impact of the screening on the natural history of the disease. Math Biosci 1978; 40: 61–109.
(20) Bartoszyński R, Edler L, Hanin L, Kopp-Schneider A, Pavlova L, Tsodikov A, Zorin A, Yakovlev A. Modeling cancer detection: tumor size as a source of information on unobservable stages of carcinogenesis. Math Biosci 2001; 171: 113–42.
(21) Moolgavkar SH, Venzon DJ. Two-event model for carcinogenesis: incidence curves for childhood and adult tumors. Math Biosci 1979; 47: 55–77.
(22) Moolgavkar SH, Knudson AG. Mutation and cancer: a model for human carcinogenesis. J Natl Cancer Inst 1981; 66: 1037–52.
(23) Moolgavkar SH, Luebeck EG. Two-event model for carcinogenesis: biological, mathematical and statistical considerations. Risk Anal 1990; 10: 323–41.
(24) Heidenreich WF. On the parameters of the clonal expansion model. Radiat Environ Biophys 1996; 35: 127–9.
(25) Hanin LG, Yakovlev AY. A nonidentifiability aspect of the two-stage model of carcinogenesis. Risk Anal 1996; 16: 711–5.
(26) Heidenreich WF, Luebeck EG, Moolgavkar SH. Some properties of the hazard function of the two-mutation clonal expansion model. Risk Anal 1997; 17: 391–9.
(27) Gregori G, Hanin L, Luebeck G, Moolgavkar S, Yakovlev A. Testing goodness of fit with stochastic models of carcinogenesis. Math Biosci 2001; 175: 13–29.
(28) Hanin L. Identification problem for stochastic models with application to carcinogenesis, cancer detection and radiation biology. Discrete Dyn Nat Soc 2002; 7: 177–89.
(29) Zorin AV, Edler L, Hanin LG, Yakovlev AY. Estimating the natural history of breast cancer from bivariate data on age and tumor size at diagnosis. In: Edler L, Kitsos CP, editors. Quantitative methods for cancer and human health risk assessment. New York (NY): Wiley; 2005. pp. 317–27.
(30) Hanin LG, Yakovlev AY. Identifiability of the joint distribution of age and tumor size at detection in the presence of screening. Math Biosci 2004, submitted.
(31) Mandelblatt J, Saha S, Teutsch S, Hoerger T, Siu AL, Atkins D, et al. A systematic review: the cost-effectiveness of screening mammography beyond age 65. Ann Intern Med 2003; 139: 835–42.
(32) Hanin LG, Tsodikov AD, Yakovlev AY. Optimal schedules of cancer surveillance and tumor size at detection. Math Comput Model 2001; 33: 1419–30.
(33) Klein JP, Moeschberger ML. Survival analysis: techniques for censored and truncated data. Springer Series in Statistics for Biology and Health. New York (NY): Springer; 1997.
(34) Miller AB, Howe GR, Wall C. The national study of breast cancer screening. Clin Invest Med 1981; 4: 227–58.
(35) Miller AB, Baines CJ, To T, Wall C. Canadian National Breast Screening Study: 1. Breast cancer detection and death rates among women aged 40–49 years. Can Med Assoc J 1992; 147: 1459–76 (published erratum in Can Med Assoc J 1993; 148: 718).
(36) Miller AB, To T, Baines CJ, Wall C. The Canadian National Breast Screening Study 1: a randomized screening trial of mammography in women age 40–49: breast cancer mortality after 11–16 years of follow-up. Ann Intern Med 2002; 137: 305–12.
(37) Miller AB, Baines CJ, To T, Wall C. Canadian National Breast Screening Study 2: breast cancer detection and death rates among women aged 50 to 59 years. Can Med Assoc J 1992; 147: 1477–88 (published erratum in Can Med Assoc J 1993; 148: 718).
(38) Miller AB, To T, Baines CJ, Wall C. Canadian National Breast Screening Study 2: 13-year results of a randomized trial in women age 50–59 years. J Natl Cancer Inst 2000; 92: 1490–9.
(39) Pflug GC. Optimization of stochastic models: the interface between simulation and optimization. Boston (MA): Kluwer Academic Publishers; 1996.
(40) Boucher KM, Kerber RA. The shape of the hazard function for cancer incidence. Math Comput Model 2001; 33: 1361–76.
(41) Clayton D, Schifflers E. Models for temporal variation in cancer rates. I: Age-period and age-cohort models. Stat Med 1987; 6: 449–67.
(42) Wun LM, Feuer EJ, Miller BA. Are increases in mammographic screening still a valid explanation for trends in breast cancer incidence in the United States? Cancer Causes Control 1995; 6: 135–44.
(43) Tarone RE, Chu KC. Age-period-cohort analyses of breast-, ovarian-, endometrial- and cervical-cancer mortality rates for Caucasian women in the USA. J Epidemiol Biostat 2000; 5: 221–31.
(44) Pocock SJ, Gore SM, Kerr GR. Long-term survival analysis: the curability of breast cancer. Stat Med 1982; 1: 93–104.
(45) Greenwood PE, Nikulin MS. A guide to chi-squared testing. New York (NY): Wiley Interscience; 1996.
(46) Verweij PJM, van Houwelingen HC. Cross-validation in survival analysis. Stat Med 1993; 12: 2305–14.
(47) Boucher K, Asselain B, Tsodikov AD, Yakovlev AY. Semiparametric versus parametric regression analysis based on the bounded cumulative hazard model: an application to breast cancer recurrence. In: Nikulin M, Balakrishnan N, Mesbah M, Limnios N, editors. Semiparametric models and applications to reliability, survival analysis and quality of life. Boston (MA): Birkhäuser; 2004. pp. 399–418.
(48) Olsen O, Gøtzsche PC. Cochrane review on screening for breast cancer with mammography. Lancet 2001; 358: 1340–2.

© The Author 2006. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oxfordjournals.org.

Journal: JNCI Monographs, Oxford University Press. Published: Oct 1, 2006.
## anonymous one year ago

The angle of depression from the top of the tree to the tip of the shadow is 25°. Find the height of the tree to the nearest tenth. a. 16.8 feet b. 18.2 feet c. 23.4 feet d. 39.7 feet

1. anonymous
2. anonymous
3. freckles [drawing] just use the tangent ratio
4. freckles though that is the angle of elevation, not the angle of depression
5. anonymous how do i solve it? what equation?
6. freckles alpha is the angle of elevation, which is 25 deg; recall tan is opposite divided by adjacent
7. anonymous
8. freckles since it does say angle of depression, I wonder if it is talking about this angle being 25 deg [drawing]
9. freckles [drawing] but that still makes that one angle 25 deg
10. anonymous so what do i divide by
11. freckles so it didn't matter
12. freckles you just do $\tan(25°)=\frac{\text{opp side of the }25°\text{ angle}}{\text{adj side of the }25°\text{ angle}}$
13. freckles the opp side is what I called h in the picture
14. freckles anyways, let me know if you need further help
15. anonymous i don't know what the measurement is
16. freckles which one?
17. anonymous h
18. freckles h is what you are solving for
19. freckles remember, h is the height of the tree
20. freckles that is what you are solving the equation for
21. freckles you are given the adjacent side, the side that touches the angle (and no, the adjacent side is not the hypotenuse)
22. freckles [drawing]
23. anonymous so what next
24. freckles plug those numbers into the equation I gave
25. freckles then just solve it for h and then bingo, you are done
26. triciaal use the equation given above by @freckles
27.
freckles $\tan(25°)=\frac{\text{opp side}}{\text{adj side}}=\frac{h}{36}$; notice I just put what I called h for the opposite side and the measurement 36 for the adjacent side. can you solve the equation?
28. triciaal [drawing]
29. triciaal did you get A?
30. anonymous i didn't get anything yet
31. anonymous what i got: 900. i know that isn't right
32. freckles ok @GEOMETRYHELP10011, do you know how to do tan(25) on your calculator?
33. anonymous no
34. freckles oh, that could be a big problem for solving the equation
35. freckles because it does require you to type tan(25) into your calculator
36. freckles you should see a button with tan on it
37. freckles you should see it next to things like sin and cos
38. freckles if you can't find them, tell me the name of your calculator and I can tell you where it is
39. anonymous i got 16.7
40. freckles ok, that's great
41. freckles that is what I and @triciaal also got
42. freckles well, 16.8
43. triciaal to round to the nearest tenth, solve to 2 decimals: you get 16.79; look at the 2nd decimal place; if it is 5 or more, add 1 to the digit before it; 9 is more than 5, so you round to 16.8; if the 2nd decimal were less than 5, you would just drop it
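The whole computation in the thread reduces to one line of trigonometry. A quick check, using the 36 ft shadow length that freckles plugs in at post 27:

```python
import math

# The 25 deg angle of depression from the treetop to the shadow tip equals
# the angle of elevation at the shadow tip (alternate interior angles), and
# the 36 ft shadow is the side adjacent to that angle; the tree height h is
# the opposite side, so tan(25 deg) = h / 36.
angle_deg = 25.0
shadow_ft = 36.0

height = shadow_ft * math.tan(math.radians(angle_deg))
print(round(height, 1))   # 16.8 -> answer (a)
```

The common mistake in the thread (getting 900) comes from treating tan(25) as 25; the tangent button on the calculator must be used.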
Chemistry

AROMATIC HYDROCARBONS

### Topics to be covered

=> Aromatic hydrocarbons
=> Nomenclature and isomerism
=> Structure of benzene
=> Resonance and stability of benzene
=> Aromaticity
=> Preparation of benzene
=> Physical properties
=> Chemical properties

### AROMATIC HYDROCARBONS

★ These hydrocarbons are also known as **arenes**.
★ Since most of them possess a pleasant odour (Greek: aroma, meaning pleasant-smelling), the class of compounds was named **aromatic compounds**.
★ The benzene ring is highly unsaturated, but in a majority of reactions of aromatic compounds the unsaturation of the benzene ring is retained.
★ Aromatic compounds containing a benzene ring are known as benzenoids and those not containing a benzene ring are known as non-benzenoids.
★ Some examples of arenes are given below:

### Nomenclature and Isomerism

★ All six hydrogen atoms in benzene are equivalent, so it forms one and only one type of monosubstituted product.
★ When two hydrogen atoms in benzene are replaced by two similar or different monovalent atoms or groups, three different position isomers are possible.
★ The 1,2 or 1,6 isomer is known as the ortho (o–), the 1,3 or 1,5 as the meta (m–), and the 1,4 as the para (p–) disubstituted compound.

### Structure of Benzene

★ Benzene was isolated by Michael Faraday in 1825.
★ The molecular formula of benzene, $C_6H_6$, indicates a high degree of unsaturation.
★ Benzene was found to be a stable molecule, yet it forms a triozonide, which indicates the presence of three double bonds.
★ Benzene was further found to produce one and only one monosubstituted derivative, which indicated that all six carbon and all six hydrogen atoms of benzene are identical.
★ On the basis of these observations, August Kekulé in 1865 proposed the following structure for benzene: a cyclic arrangement of six carbon atoms with alternate single and double bonds and one hydrogen atom attached to each carbon atom.
★ The Kekulé structure indicates the possibility of two isomeric 1,2-dibromobenzenes. In one of the isomers the bromine atoms are attached to the doubly bonded carbon atoms, whereas in the other they are attached to the singly bonded carbons.
★ However, benzene was found to form only one ortho disubstituted product. Kekulé overcame this problem by suggesting that the double bonds in benzene oscillate between the two positions, as given below.
★ Even with this modification, the Kekulé structure of benzene fails to explain its unusual stability and its preference for substitution over addition reactions, which could later be explained by resonance.

### Resonance and stability of benzene

★ Benzene is a hybrid of various resonating structures. The hybrid structure is represented by inserting a circle or a dotted circle in the hexagon, as shown in (C), representing the delocalisation of the six electrons among the six carbon atoms of the benzene ring.

★ EXPLANATION OF STRUCTURE USING THE ORBITAL OVERLAP CONCEPT:

★ All six carbon atoms in benzene are $sp^2$ hybridised.
★ Two $sp^2$ hybrid orbitals of each carbon atom overlap with $sp^2$ hybrid orbitals of adjacent carbon atoms to form six C–C sigma bonds, which lie in the hexagonal plane.
★ The remaining $sp^2$ hybrid orbital of each carbon atom overlaps with the s orbital of a hydrogen atom to form six C–H sigma bonds.
★ Each carbon atom is now left with one unhybridised p orbital perpendicular to the plane of the ring, as shown below.
★ The unhybridised p orbitals of the carbon atoms are close enough to form π bonds by lateral overlap. There are two equal possibilities of forming three π bonds, by overlap of the p orbitals of $C_1$–$C_2$, $C_3$–$C_4$, $C_5$–$C_6$ or of $C_2$–$C_3$, $C_4$–$C_5$, $C_6$–$C_1$, as shown in the following figures.
★ Structures shown in Fig.
13.7(a) and (b) correspond to the two Kekulé structures with localised π bonds. ★ The X-ray diffraction technique is used to determine the internuclear distances between the carbon atoms in the ring; all were found to be the same, so there is equal probability for the p orbital of each carbon atom to overlap with the p orbitals of the adjacent carbon atoms [Fig. 13.7 (c)]. This can be represented in the form of two doughnuts (rings) of electron clouds [Fig. 13.7 (d)], one above and one below the plane of the hexagonal ring, as shown below: ★ The six π electrons are thus delocalised and can move freely about the six carbon nuclei, instead of any two as shown in Fig. 13.6 (a) or (b). The delocalised π electron cloud is attracted more strongly by the nuclei of the carbon atoms than an electron cloud localised between two carbon atoms. Therefore, the presence of delocalised π electrons makes benzene more stable than the hypothetical cyclohexatriene. ★ X-ray diffraction data reveals that benzene is a planar molecule, and that all six C—C bond lengths are of the same order (139 pm), intermediate between a C—C single bond (154 pm) and a C=C double bond (133 pm). ★ Thus the absence of pure double bonds in benzene accounts for its reluctance to show addition reactions under normal conditions, explaining the unusual behaviour of benzene. ### Aromaticity: For a compound to be aromatic it must possess the following characteristics: (i) Planarity (ii) Complete delocalisation of the π electrons in the ring (iii) Presence of (4n + 2) π electrons in the ring, where n is an integer (n = 0, 1, 2, …). This is often referred to as the Hückel rule. 
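The (4n + 2) electron count in the Hückel rule can be checked mechanically. A minimal sketch (the function name and the comparison cases are our own illustration; the function only tests the count, not planarity or conjugation):

```python
def satisfies_huckel(pi_electrons):
    """True if a planar, fully conjugated ring with this many
    pi electrons matches (4n + 2) for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# Benzene has 6 pi electrons (n = 1), so it satisfies the rule;
# cyclobutadiene's 4 pi electrons do not.
print(satisfies_huckel(6))   # benzene
print(satisfies_huckel(4))   # cyclobutadiene
```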
### Preparation of Benzene Laboratory methods: (i) Cyclic polymerisation of ethyne: We have already discussed this. (ii) Decarboxylation of aromatic acids: The sodium salt of benzoic acid, on heating with soda lime, gives benzene. (iii) Reduction of phenol: Phenol is reduced to benzene by passing its vapours over heated zinc dust. ### Physical properties: ★ Aromatic hydrocarbons are non-polar molecules and are usually colourless liquids or solids with a characteristic aroma. For example, naphthalene balls are used in toilets and for the preservation of clothes because of the compound's distinctive smell and moth-repellent property. ★ Aromatic hydrocarbons are immiscible with water but are readily miscible with organic solvents. ★ They burn with a sooty flame. ### Chemical properties ★ Arenes are characterised by electrophilic substitution reactions. However, under special conditions they can also undergo addition and oxidation reactions. ### Electrophilic substitution reactions: (i) Nitration: When benzene is heated with a mixture of concentrated nitric acid and concentrated sulphuric acid (nitrating mixture), a nitro group is introduced into the benzene ring. (ii) Halogenation: Arenes react with halogens in the presence of a Lewis acid such as anhydrous FeCl_3, FeBr_3 or AlCl_3 to yield haloarenes. (iii) Sulphonation: The replacement of a hydrogen atom by a sulphonic acid group in a ring is called sulphonation. It is carried out by heating benzene with fuming sulphuric acid (oleum). (iv) Friedel-Crafts alkylation reaction: When benzene is treated with an alkyl halide in the presence of anhydrous aluminium chloride, an alkylbenzene is formed. 
(v) Friedel-Crafts acylation reaction: The reaction of benzene with an acyl halide or an acid anhydride in the presence of a Lewis acid (AlCl_3) yields an acyl benzene. If an excess of the electrophilic reagent is used, further substitution may take place, in which other hydrogen atoms of the benzene ring are successively replaced by the electrophile. For example, benzene on treatment with excess chlorine in the presence of anhydrous AlCl_3 in the dark yields hexachlorobenzene (C_6Cl_6). ### Mechanism of electrophilic substitution reactions According to experimental evidence, S_E (S = substitution; E = electrophilic) reactions proceed via the following three steps: (a) Generation of the electrophile (b) Formation of a carbocation intermediate (c) Removal of a proton from the carbocation intermediate (a) Generation of the electrophile E⁺: During chlorination, alkylation and acylation of benzene, anhydrous AlCl_3, being a Lewis acid, helps generate the electrophile Cl⁺, R⁺ or RCO⁺ (acylium ion), respectively, by combining with the attacking reagent. In the case of nitration, the electrophile, the nitronium ion NO_2⁺, is produced by transfer of a proton (from sulphuric acid) to nitric acid in the following manner: Step I It is interesting to note that in the process of generating the nitronium ion, sulphuric acid serves as an acid and nitric acid as a base; it is thus a simple acid-base equilibrium. (b) Formation of the carbocation (arenium ion): Attack of the electrophile results in the formation of a σ-complex or arenium ion, in which one of the carbons is sp^3 hybridised. The arenium ion is stabilised by resonance. The σ-complex (arenium ion) loses its aromatic character because delocalisation of the π electrons stops at the sp^3 hybridised carbon. 
(c) Removal of the proton: To restore the aromatic character, the σ-complex releases a proton from the sp^3 hybridised carbon on attack by [AlCl_4]^- (in the case of halogenation, alkylation and acylation) or [HSO_4]^- (in the case of nitration). ### Addition reactions Under vigorous conditions, i.e., at high temperature and/or pressure in the presence of a nickel catalyst, hydrogenation of benzene gives cyclohexane. Under ultraviolet light, three chlorine molecules add to benzene to produce benzene hexachloride, C_6H_6Cl_6, which is also called gammexane. ### Combustion When heated in air, benzene burns with a sooty flame producing CO_2 and H_2O: C_6H_6 + 15/2 O_2 → 6CO_2 + 3H_2O .............(13.82) The general combustion reaction for any hydrocarbon C_xH_y may be given by the following chemical equation: C_xH_y + (x + y/4) O_2 → x CO_2 + (y/2) H_2O ......................(13.83)
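The general combustion equation (13.83) can be sanity-checked in a few lines. A minimal sketch (the helper name is our own):

```python
from fractions import Fraction

def combustion_coefficients(x, y):
    """For C_x H_y + (x + y/4) O2 -> x CO2 + (y/2) H2O,
    return the moles of O2, CO2 and H2O per mole of fuel."""
    o2 = Fraction(x) + Fraction(y, 4)
    return o2, x, Fraction(y, 2)

# Benzene, C6H6: 15/2 O2, 6 CO2, 3 H2O, matching equation (13.82).
print(combustion_coefficients(6, 6))
```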
# Probability of $X$ > $Y$, given $X$, $Y$ being the maximum of distinct Normal Distribution Families Let $$\begin{matrix} X_i & \sim & N(\mu_X, \sigma_X^2) \\ Y_i & \sim & N(\mu_Y, \sigma_Y^2) \\ X & := & \max(X_i, i \in \{ 1,2, ..., n_X\}) \\ Y & := & \max(Y_i, i \in \{ 1,2, ..., n_Y\}) \end{matrix}$$ What is $$\mathbb{P}[X>Y]$$, as a function of $$( \mu_X, \sigma_X, n_X, \mu_Y, \sigma_Y, n_Y )$$? • You presumably wish to assume that $X$ and $Y$ are independent Jul 25 at 2:44 $$\Pr(X>Y)=\sum_{k=1}^{n_x}\Pr(X_k>\max\{Y_i\},\ X_k=\max\{X_i\})$$ Conditioning on the value $x_k$ of $X_k$ and using independence, $$\Pr(X_k>\max\{Y_i\},\ X_k=\max\{X_i\})=\int \Pr(Y_1<x_k,\dots,Y_{n_y}<x_k)\,\Pr(X_j<x_k\ \forall j\neq k)\,f_X(x_k)\,dx_k$$ Thus $$\Pr(X>Y)=\sum_{k=1}^{n_x}\int \Phi\!\left(\frac{x_k-\mu_y}{\sigma_y}\right)^{n_y}\Phi\!\left(\frac{x_k-\mu_x}{\sigma_x}\right)^{n_x-1}f_X(x_k)\,dx_k,$$ where $$f_X(\cdot)$$ is the density of $$X_i$$ and depends on $$\mu_x,\sigma_x$$. Since the $$n_x$$ summands are identical, the sum is just $$n_x$$ times the integral.
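The answer reduces the probability to a one-dimensional integral, which is easy to evaluate numerically and to check against simulation. A sketch (function names and parameter values are our own):

```python
import math
import random

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def p_x_beats_y(mu_x, s_x, n_x, mu_y, s_y, n_y, grid=20000):
    """Midpoint-rule evaluation of
    n_x * ∫ Phi((t-mu_y)/s_y)^n_y * Phi((t-mu_x)/s_x)^(n_x-1) f_X(t) dt."""
    lo = min(mu_x - 8 * s_x, mu_y - 8 * s_y)
    hi = max(mu_x + 8 * s_x, mu_y + 8 * s_y)
    h = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        t = lo + (i + 0.5) * h
        total += (Phi((t - mu_y) / s_y) ** n_y
                  * Phi((t - mu_x) / s_x) ** (n_x - 1)
                  * phi((t - mu_x) / s_x) / s_x)
    return n_x * total * h

def monte_carlo(mu_x, s_x, n_x, mu_y, s_y, n_y, trials=100000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = max(rng.gauss(mu_x, s_x) for _ in range(n_x))
        y = max(rng.gauss(mu_y, s_y) for _ in range(n_y))
        if x > y:
            wins += 1
    return wins / trials

print(p_x_beats_y(0, 1, 3, 0.5, 2, 2))
print(monte_carlo(0, 1, 3, 0.5, 2, 2))
```

By symmetry, the formula must return 1/2 when both families have the same parameters and sample sizes, which is a useful extra check.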
polypath {graphics} R Documentation ## Path Drawing ### Description polypath draws a path whose vertices are given in x and y. ### Usage polypath(x, y = NULL, border = NULL, col = NA, lty = par("lty"), rule = "winding", ...) ### Arguments x, y vectors containing the coordinates of the vertices of the path. col the color for filling the path. The default, NA, is to leave paths unfilled. border the color to draw the border. The default, NULL, means to use par("fg"). Use border = NA to omit borders. For compatibility with S, border can also be logical, in which case FALSE is equivalent to NA (borders omitted) and TRUE is equivalent to NULL (use the foreground colour). lty the line type to be used, as in par. rule character value specifying the path fill mode: either "winding" or "evenodd". ... graphical parameters such as xpd, lend, ljoin and lmitre can be given as arguments. ### Details The coordinates can be passed in a plotting structure (a list with x and y components), a two-column matrix, .... See xy.coords. It is assumed that the path is to be closed by joining the last point to the first point. The coordinates can contain missing values. The behaviour is similar to that of polygon, except that instead of breaking a polygon into several polygons, NA values break the path into several sub-paths (including closing the last point to the first point in each sub-path). See the examples below. The distinction between a path and a polygon is that the former can contain holes, as interpreted by the fill rule: a region is filled if the path border encircles it an odd number of times ("evenodd") or a non-zero number of times ("winding"). Hatched shading (as implemented for polygon()) is not (currently) supported. Not all graphics devices support this function: for example xfig and pictex do not. ### References Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole. Murrell, P. (2005) R Graphics. Chapman & Hall/CRC Press. 
### See Also segments for even more flexibility, lines, rect, box, polygon. par for how to specify colors. ### Examples plotPath <- function(x, y, col = "grey", rule = "winding") { plot.new() plot.window(range(x, na.rm = TRUE), range(y, na.rm = TRUE)) polypath(x, y, col = col, rule = rule) if (!is.na(col)) mtext(paste("Rule:", rule), side = 1, line = 0) } plotRules <- function(x, y, title) { plotPath(x, y) plotPath(x, y, rule = "evenodd") mtext(title, side = 3, line = 0) plotPath(x, y, col = NA) } op <- par(mfrow = c(5, 3), mar = c(2, 1, 1, 1)) plotRules(c(.1, .1, .9, .9, NA, .2, .2, .8, .8), c(.1, .9, .9, .1, NA, .2, .8, .8, .2), "Nested rectangles, both clockwise") plotRules(c(.1, .1, .9, .9, NA, .2, .8, .8, .2), c(.1, .9, .9, .1, NA, .2, .2, .8, .8), "Nested rectangles, outer clockwise, inner anti-clockwise") plotRules(c(.1, .1, .4, .4, NA, .6, .9, .9, .6), c(.1, .4, .4, .1, NA, .6, .6, .9, .9), "Disjoint rectangles") plotRules(c(.1, .1, .6, .6, NA, .4, .4, .9, .9), c(.1, .6, .6, .1, NA, .4, .9, .9, .4), "Overlapping rectangles, both clockwise") plotRules(c(.1, .1, .6, .6, NA, .4, .9, .9, .4), c(.1, .6, .6, .1, NA, .4, .4, .9, .9), "Overlapping rectangles, one clockwise, other anti-clockwise") par(op) [Package graphics version 4.2.0 Index]
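The difference between the two values of `rule` can be reproduced outside of R with a point-in-path test. A small pure-Python sketch (our own illustration, not part of the graphics package; sub-paths are separate point lists instead of NA-separated vectors) that mirrors the "Nested rectangles, both clockwise" example:

```python
def ray_counts(point, subpaths):
    """Winding number and crossing count of a rightward horizontal
    ray from `point`, for a path of implicitly closed sub-paths."""
    px, py = point
    winding = crossings = 0
    for path in subpaths:
        n = len(path)
        for i in range(n):
            (x1, y1), (x2, y2) = path[i], path[(i + 1) % n]
            if (y1 <= py) != (y2 <= py):   # edge spans the ray's height
                x_at = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_at > px:
                    crossings += 1
                    winding += 1 if y2 > y1 else -1
    return winding, crossings

def filled(point, subpaths, rule="winding"):
    """Is `point` inside the path under the given fill rule?"""
    winding, crossings = ray_counts(point, subpaths)
    return winding != 0 if rule == "winding" else crossings % 2 == 1

# Two nested rectangles, both traced clockwise, as in the first example.
outer = [(0.1, 0.1), (0.1, 0.9), (0.9, 0.9), (0.9, 0.1)]
inner = [(0.2, 0.2), (0.2, 0.8), (0.8, 0.8), (0.8, 0.2)]
print(filled((0.5, 0.5), [outer, inner], "winding"))  # True: no hole
print(filled((0.5, 0.5), [outer, inner], "evenodd"))  # False: hole
```

Reversing the inner rectangle's orientation makes the winding rule produce a hole as well, matching the second example on the help page.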
# Self-averaging of kinetic models for waves in random media @article{Bal2007SelfaveragingOK, title={Self-averaging of kinetic models for waves in random media}, author={Guillaume Bal and Olivier Pinaud}, journal={Kinetic and Related Models}, year={2007}, volume={1}, pages={85-100} } • Published 25 November 2007 • Physics • Kinetic and Related Models Kinetic equations are often appropriate to model the energy density of high frequency waves propagating in highly heterogeneous media. The limitations of the kinetic model are quantified by the statistical instability of the wave energy density, i.e., by its sensitivity to changes in the realization of the underlying heterogeneous medium modeled as a random medium. In the simplified Ito-Schrodinger regime of wave propagation, we obtain optimal estimates for the statistical… 11 Citations Single scattering estimates for the scintillation function of waves in random media • Physics • 2010 The energy density of high frequency waves propagating in highly oscillatory random media is well approximated by solutions of deterministic kinetic models. The scintillation function determines the Dynamics of Wave Scintillation in Random Media • Mathematics • 2010 This paper concerns the asymptotic structure of the scintillation function in the simplified setting of wave propagation modeled by an Itô–Schrödinger equation. We show that the size of the IMAGING USING TRANSPORT MODELS FOR WAVE–WAVE CORRELATIONS • Physics • 2011 We consider the imaging of objects buried in unknown heterogeneous media. The medium is probed by using classical (e.g. acoustic or electromagnetic) waves. When heterogeneities in the medium become Intensity fluctuations in random waveguides • J. Garnier • Physics Communications in Mathematical Sciences • 2020 An asymptotic analysis of wave propagation in randomly perturbed waveguides is carried out in order to identify the effective Markovian dynamics of the guided mode powers. 
Inverse problems in random media: a kinetic approach We consider the validity and accuracy of kinetic equations to model the propagation of high frequency waves in highly heterogeneous media. The kinetic models are used to infer the macroscopic Correlation of ambient noise signals in the radiative transport regime • Physics • 2011 The cross correlation of the wave signals emitted by ambient noise sources can be used to estimate the Green’s function of the wave equation in an inhomogeneous medium. In this paper we clarify the Kinetic limits for waves in a random medium • Mathematics • 2010 2 Diffusive limit for a particle in a random flow 2.1 Diffusion of a particle in a time-dependent random flow 2.1.1 The central limit theorem, purely Fluctuations of solutions to Wigner equation with an Ornstein-Uhlenbeck potential • Mathematics • 2012 We consider energy fluctuations for solutions of the Schrodinger equation with an Ornstein-Uhlenbeck random potential when the initial data is spatially localized. The limit of the fluctuations of Time reversal for radiative transport with applications to inverse and control problems In this paper we develop a time reversal method for the radiative transport equation to solve two problems: an inverse problem for the recovery of an initial condition from boundary measurements, and ## References SHOWING 1-10 OF 27 REFERENCES On the Self-Averaging of Wave Energy in Random Media • G. Bal • Mathematics Multiscale Model. Simul. • 2004 It is shown that wave energy is not stable, and instead scintillation is created by the wave dynamics, when the initial energy distribution is sufficiently singular. 
Kinetic Models for Imaging in Random Media • Physics Multiscale Model. Simul. • 2007 This work quantifies the influence of small objects on (i) the energy density measured at an array of detectors and (ii) the correlation between the wave field measured in the absence of the object and the wave field measured in the presence of the objects. Self-Averaging from Lateral Diversity in the Itô-Schrödinger Equation • Physics, Mathematics Multiscale Model. Simul. • 2007 The Wigner transform of the wave field is used and it is shown that it becomes deterministic in the large diversity limit when integrated against test functions and also shows that the limit is deterministic when the support of the test functions tends to zero but is large compared to the correlation length. Parabolic and Gaussian White Noise Approximation for Wave Propagation in Random Media • Mathematics SIAM J. Appl. Math. • 1996 The parabolic or forward scattering approximation has been used extensively in the study of wave propagation and the validity of this approximation is proved for stratified weakly fluctuating random media in the high-frequencies regime. SELF-AVERAGING IN TIME REVERSAL FOR THE PARABOLIC WAVE EQUATION • Mathematics • 2002 We analyze the self-averaging properties of time-reversed solutions of the paraxial wave equation with random coefficients, which we take to be Markovian in the direction of propagation. This allows Self-Averaging of Wigner Transforms in Random Media • Mathematics • 2003 AbstractWe establish the self-averaging properties of the Wigner transform of a mixture of states in the regime when the correlation length of the random medium is much longer than the wave length Kinetic Limit for Wave Propagation in a Random Medium • Mathematics • 2006 We study crystal dynamics in the harmonic approximation. The atomic masses are weakly disordered, in the sense that their deviation from uniformity is of the order $$\sqrt{\epsilon}$$. The dispersion
Find the general solution of dy/dx + 2y/x = 2e^(x²)(x²+1)/x. If y=1 when x=1, express y in terms of x? Cesareo R. Featured 1 month ago See below. Explanation: This differential equation is linear, so its solution can be represented as the sum $y = y_h + y_p$ with $y_h' + \frac{2}{x} y_h = 0$ and $y_p' + \frac{2}{x} y_p = 2 e^{x^2} \left(\frac{x^2+1}{x}\right)$ The solution for $y_h$ is easily determined as $y_h = \frac{C_0}{x^2}$. Now supposing that $y_p = \frac{C(x)}{x^2}$ and substituting into $y_p' + \frac{2}{x} y_p = 2 e^{x^2} \left(\frac{x^2+1}{x}\right)$ we get $\frac{2 e^{x^2}(1+x^2)}{x} - \frac{C'(x)}{x^2} = 0$ or $C'(x) = 2 e^{x^2} x (1+x^2)$ or $C(x) = e^{x^2} x^2 + C_1$ and finally $y = \frac{e^{x^2} x^2 + C_1}{x^2} = \frac{C_1}{x^2} + e^{x^2}$ Applying the condition $y(1) = 1$ gives $1 = C_1 + e$, so $C_1 = 1 - e$ and $y = \frac{1-e}{x^2} + e^{x^2}$. Differentiate f(x) = lnabs(x/(1+x^2))? Rhys Featured 3 weeks ago $f'(x) = \frac{1}{x} - \frac{2x}{1+x^2}$ Explanation: We must use our log laws: $\log_\alpha\left(\frac{\beta}{\gamma}\right) \equiv \log_\alpha \beta - \log_\alpha \gamma$ $\implies \ln\left|\frac{x}{1+x^2}\right| = \ln|x| - \ln|1+x^2|$ Now applying our knowledge of differentiating logs: $\frac{d}{dx} \ln(f(x)) = \frac{f'(x)}{f(x)}$ By using the chain rule: $\frac{d}{dx}\left(\ln|x| - \ln|1+x^2|\right) = \frac{\frac{d}{dx}(x)}{x} - \frac{\frac{d}{dx}(1+x^2)}{1+x^2} = \frac{1}{x} - \frac{2x}{1+x^2}$ Let I=int_0^1 9/(3+x^2)^2\ dx. Using the substitution x=sqrt(3) tan(theta), show that I= sqrt(3) int_0^(pi/6) cos^2(theta)\ d theta. What is the exact value of I? 
NickTheTurtle Featured 3 weeks ago $\int_0^1 \frac{9}{(3+x^2)^2}\,dx = \frac{2\sqrt{3}\,\pi + 9}{24}$ Explanation: $\int_0^1 \frac{9}{(3+x^2)^2}\,dx$ Substitute $x = \sqrt{3}\tan(\theta)$ and $dx = \sqrt{3}\sec^2(\theta)\,d\theta$: $= \int_{\arctan(0)}^{\arctan(1/\sqrt{3})} \frac{9}{\left(3+(\sqrt{3}\tan\theta)^2\right)^2}\,\sqrt{3}\sec^2(\theta)\,d\theta$ $= \sqrt{3}\int_0^{\pi/6} \frac{9\sec^2(\theta)}{(3+3\tan^2(\theta))^2}\,d\theta$ $= \sqrt{3}\int_0^{\pi/6} \frac{\sec^2(\theta)}{(1+\tan^2(\theta))^2}\,d\theta$ Using the fact that $\tan^2(\theta) + 1 = \sec^2(\theta)$: $= \sqrt{3}\int_0^{\pi/6} \frac{\sec^2(\theta)}{\sec^4(\theta)}\,d\theta = \sqrt{3}\int_0^{\pi/6} \frac{1}{\sec^2(\theta)}\,d\theta$ Since $\sec(\theta) = \frac{1}{\cos(\theta)}$, we have $= \sqrt{3}\int_0^{\pi/6} \cos^2(\theta)\,d\theta$ Using the fact that $\cos^2(\theta) = \frac{1+\cos(2\theta)}{2}$: $= \frac{\sqrt{3}}{2}\int_0^{\pi/6} 1 + \cos(2\theta)\,d\theta$ $= \frac{\sqrt{3}}{2}\left[\theta + \frac{\sin(2\theta)}{2}\right]_0^{\pi/6}$ $= \frac{\sqrt{3}}{2}\left(\frac{\pi}{6} + \frac{\sqrt{3}}{4}\right)$ $= \frac{\sqrt{3}\,\pi}{12} + \frac{3}{8} = \frac{2\sqrt{3}\,\pi + 9}{24}$ A curve is such that dy/dx=4/sqrt((6-2x)) and P(1,8) is a point on the curve. 1. The normal to the curve at the point P meets the coordinate axes at Q and at R. 
Find the coordinates of the mid-point of QR. 2. Find the equation of the curve? Featured 3 weeks ago The coordinates of the midpoint of $QR$ are $\left(\frac{17}{2}, \frac{17}{4}\right)$. The equation of the curve is $y = -4(6-2x)^{1/2} + 16$. Explanation: The gradient of the curve $f(x)$ is $\frac{dy}{dx} = f'(x) = \frac{4}{\sqrt{6-2x}}$ At the point $P(1,8)$, the gradient is $f'(1) = \frac{4}{\sqrt{6-2\cdot 1}} = \frac{4}{\sqrt{4}} = 2$ The slope of the tangent at the point $P$ is $m = 2$. Therefore, the slope of the normal at the point $P$ is $m' = -\frac{1}{m} = -\frac{1}{2}$ The equation of the normal at the point $P$ is $y - 8 = -\frac{1}{2}(x-1)$ $2y - 16 = -x + 1$ $2y + x = 17$ When $x = 0$, $\implies$, $y = \frac{17}{2}$, so the point $Q = \left(0, \frac{17}{2}\right)$ When $y = 0$, $\implies$, $x = 17$, so the point $R = (17, 0)$ The midpoint of $QR$ is $\left(\frac{17+0}{2}, \frac{\frac{17}{2}+0}{2}\right) = \left(\frac{17}{2}, \frac{17}{4}\right)$ The equation of the curve is obtained by integrating the derivative, or by solving the differential equation $y = \int \frac{4\,dx}{\sqrt{6-2x}}$ $= 4\int (6-2x)^{-1/2}\,dx$ $= 4 \cdot \left(-\frac{1}{2}\right) \cdot 2 \cdot (6-2x)^{1/2} + C$ $y = -4(6-2x)^{1/2} + C$ Plugging in the values of $P(1,8)$: $8 = -4(6-2)^{1/2} + C$ $C = 16$ The equation of the curve is $y = -4(6-2x)^{1/2} + 16$. Limit when x tends to 0? Andrea S. 
Featured 1 week ago $\lim_{x \to 0} (e^x + 3x)^{1/x} = e^4$ Explanation: Write the function as: $(e^x + 3x)^{1/x} = \left(e^{\ln(e^x+3x)}\right)^{1/x} = e^{\frac{\ln(e^x+3x)}{x}}$ Consider now the limit: $\lim_{x \to 0} \frac{\ln(e^x+3x)}{x}$ It is in the indeterminate form $\frac{0}{0}$, so we can use l'Hospital's rule: $\lim_{x \to 0} \frac{\ln(e^x+3x)}{x} = \lim_{x \to 0} \frac{\frac{d}{dx}\ln(e^x+3x)}{\frac{d}{dx}x}$ $\lim_{x \to 0} \frac{\ln(e^x+3x)}{x} = \lim_{x \to 0} \frac{e^x+3}{e^x+3x} = 4$ As the limit is finite and the function $e^x$ is continuous for $x \in \mathbb{R}$, we have: $\lim_{x \to 0} e^{\frac{\ln(e^x+3x)}{x}} = e^{\left(\lim_{x \to 0} \frac{\ln(e^x+3x)}{x}\right)} = e^4$ What is the significance of partial derivative? Give an example and help me to understand in brief. Cesareo R. Featured yesterday See below. Explanation: I hope it helps. The partial derivative is intrinsically associated with the total variation. Suppose we have a function $f(x,y)$ and we want to know how much it varies when we introduce an increment to each variable. 
Fixing ideas, making $f(x,y) = kxy$, we want to know how much it is $df(x,y) = f(x+dx, y+dy) - f(x,y)$ In our function-example we have $f(x+dx, y+dy) = k(x+dx)(y+dy) = kxy + ky\,dx + kx\,dy + k\,dx\,dy$ and then $df(x,y) = kxy + ky\,dx + kx\,dy + k\,dx\,dy - kxy = ky\,dx + kx\,dy + k\,dx\,dy$ Choosing $dx, dy$ arbitrarily small, then $dx\,dy \approx 0$ and then $df(x,y) = ky\,dx + kx\,dy$ but generally $df(x,y) = f(x+dx, y+dy) - f(x,y) = \frac{1}{2}\left(2f(x+dx,y+dy) - 2f(x,y) + f(x+dx,y) - f(x+dx,y) + f(x,y+dy) - f(x,y+dy)\right) =$ $= \frac{1}{2}\frac{f(x+dx,y) - f(x,y)}{dx}dx + \frac{1}{2}\frac{f(x,y+dy) - f(x,y)}{dy}dy +$ $+ \frac{1}{2}\frac{f(x+dx,y+dy) - f(x,y+dy)}{dx}dx + \frac{1}{2}\frac{f(x+dx,y+dy) - f(x+dx,y)}{dy}dy$ now making $dx, dy$ arbitrarily small we have $df(x,y) = \frac{1}{2}\left(2f_x(x,y)\,dx + 2f_y(x,y)\,dy\right) = f_x(x,y)\,dx + f_y(x,y)\,dy$ so we can compute the total variation for a given function, by calculating the partial 
derivatives $f_{x_1}, f_{x_2}, \cdots, f_{x_n}$ and compounding $df(x_1, x_2, \cdots, x_n) = f_{x_1}\,dx_1 + \cdots + f_{x_n}\,dx_n$ Here, the quantities $f_{x_i}$ are called partial derivatives and can also be represented as $\frac{\partial f}{\partial x_i}$ In our example $f_x = \frac{\partial f}{\partial x} = ky$ and $f_y = \frac{\partial f}{\partial y} = kx$ NOTE $f_x(x,y) = \lim_{dx \to 0} \frac{f(x+dx,y) - f(x,y)}{dx}$ and $f_y(x,y) = \lim_{dy \to 0} \frac{f(x,y+dy) - f(x,y)}{dy}$
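The increment heuristic above can be checked against central finite differences. A short sketch (our own illustration, with k = 3; for f = kxy the exact partials are ky and kx):

```python
def partial_x(f, x, y, h=1e-6):
    # central-difference approximation to df/dx at (x, y)
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # central-difference approximation to df/dy at (x, y)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

k = 3.0
f = lambda x, y: k * x * y

print(partial_x(f, 2.0, 5.0))  # ~ k*y = 15
print(partial_y(f, 2.0, 5.0))  # ~ k*x = 6
```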
# Rook polynomial In combinatorial mathematics, a rook polynomial is a generating polynomial of the number of ways to place non-attacking rooks on a board that looks like a checkerboard; that is, no two rooks may be in the same row or column. The board is any subset of the squares of a rectangular board with m rows and n columns; we think of it as the squares in which one is allowed to put a rook. The board is the ordinary chessboard if all squares are allowed and m = n = 8, and a chessboard of any size if all squares are allowed and m = n. The coefficient of x^k in the rook polynomial R_B(x) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B. The rooks are arranged in such a way that there is no pair of rooks in the same row or column. In this sense, an arrangement is the positioning of rooks on a static, immovable board; the arrangement will not be different if the board is rotated or reflected while keeping the squares stationary. The polynomial also remains the same if rows are interchanged or columns are interchanged. The term "rook polynomial" was coined by John Riordan.[1] Despite the name's derivation from chess, the impetus for studying rook polynomials is their connection with counting permutations (or partial permutations) with restricted positions. A board B that is a subset of the n × n chessboard corresponds to permutations of n objects, which we may take to be the numbers 1, 2, ..., n, such that the number a_j in the j-th position in the permutation must be the column number of an allowed square in row j of B. 
Famous examples include the number of ways to place n non-attacking rooks on: • an entire n × n chessboard, which is an elementary combinatorial problem; • the same board with its diagonal squares forbidden; this is the derangement or "hat-check" problem; • the same board without the squares on its diagonal and immediately above its diagonal (and without the bottom left square), which is essential in the solution of the problème des ménages. Interest in rook placements arises in pure and applied combinatorics, group theory, number theory, and statistical physics. The particular value of rook polynomials comes from the utility of the generating function approach, and also from the fact that the zeroes of the rook polynomial of a board provide valuable information about its coefficients, i.e., the number of non-attacking placements of k rooks. ## Definition The rook polynomial R_B(x) of a board B is the generating function for the numbers of arrangements of non-attacking rooks: ${\displaystyle R_{B}(x)=\sum _{k=0}^{\infty }r_{k}(B)x^{k}}$ where r_k is the number of ways to place k non-attacking rooks on the board. Despite the notation, this is a finite sum, since the board is finite so there is a maximum number of non-attacking rooks it can hold; indeed, there cannot be more rooks than the smaller of the number of rows and columns in the board. ### Complete boards The first few rook polynomials on square n × n boards are (with R_n = R_B): {\displaystyle {\begin{aligned}R_{1}(x)&=x+1\\R_{2}(x)&=2x^{2}+4x+1\\R_{3}(x)&=6x^{3}+18x^{2}+9x+1\\R_{4}(x)&=24x^{4}+96x^{3}+72x^{2}+16x+1.\end{aligned}}} In words, this means that on a 1 × 1 board, 1 rook can be arranged in 1 way, and zero rooks can also be arranged in 1 way (empty board); on a complete 2 × 2 board, 2 rooks can be arranged in 2 ways (on the diagonals), 1 rook can be arranged in 4 ways, and zero rooks can be arranged in 1 way; and so forth for larger boards. 
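The coefficients listed above are easy to verify by brute force over subsets of allowed squares. A short sketch (our own, not from the article):

```python
from itertools import combinations

def rook_numbers(board, max_k):
    """board: iterable of allowed (row, col) squares.
    Returns [r_0, r_1, ..., r_max_k], where r_k counts the ways to
    place k mutually non-attacking rooks on the allowed squares."""
    squares = list(board)
    r = [1]  # r_0 = 1: the empty placement
    for k in range(1, max_k + 1):
        count = 0
        for combo in combinations(squares, k):
            rows = {s[0] for s in combo}
            cols = {s[1] for s in combo}
            if len(rows) == k and len(cols) == k:  # no shared row/column
                count += 1
        r.append(count)
    return r

full = lambda n: [(i, j) for i in range(n) for j in range(n)]
# R_3(x) = 6x^3 + 18x^2 + 9x + 1 from the list above.
print(rook_numbers(full(3), 3))  # [1, 9, 18, 6]
```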
For complete m × n rectangular boards B_{m,n} we write R_{m,n} := R_{B_{m,n}}. The smaller of m and n can be taken as an upper limit for k, since obviously r_k = 0 if k > min(m, n). This is also shown in the formula for R_{m,n}(x). The rook polynomial of a rectangular chessboard is closely related to the generalized Laguerre polynomial L_n^{(α)}(x) by the identity ${\displaystyle R_{m,n}(x)=n!x^{n}L_{n}^{(m-n)}(-x^{-1}).}$ ## Matching polynomials A rook polynomial is a special case of one kind of matching polynomial, which is the generating function of the number of k-edge matchings in a graph. The rook polynomial R_{m,n}(x) corresponds to the complete bipartite graph K_{m,n}. The rook polynomial of a general board B ⊆ B_{m,n} corresponds to the bipartite graph with left vertices v_1, v_2, ..., v_m and right vertices w_1, w_2, ..., w_n and an edge v_i w_j whenever the square (i, j) is allowed, i.e., belongs to B. Thus, the theory of rook polynomials is, in a sense, contained in that of matching polynomials. We deduce an important fact about the coefficients r_k, which, we recall, give the number of non-attacking placements of k rooks in B: these numbers are unimodal, i.e., they increase to a maximum and then decrease. This follows (by a standard argument) from the theorem of Heilmann and Lieb[2] about the zeroes of a matching polynomial (a different one from that which corresponds to a rook polynomial, but equivalent to it under a change of variables), which implies that all the zeroes of a rook polynomial are negative real numbers. ## Connection to matrix permanents For incomplete square n × n boards (i.e. when rooks are not allowed to be played on some arbitrary subset of the board's squares), computing the number of ways to place n rooks on the board is equivalent to computing the permanent of a 0–1 matrix. ## Complete rectangular boards ### Rooks problems A precursor to the rook polynomial is the classic "Eight rooks problem" by H. E. 
Dudeney[3] in which he shows that the maximum number of non-attacking rooks on a chessboard is eight by placing them on one of the main diagonals (Fig. 1). The question asked is: "In how many ways can eight rooks be placed on an 8 × 8 chessboard so that neither of them attacks the other?" The answer is: "Obviously there must be a rook in every row and every column. Starting with the bottom row, it is clear that the first rook can be put on any one of eight different squares (Fig. 1). Wherever it is placed, there is the option of seven squares for the second rook in the second row. Then there are six squares from which to select the third rook in the third row, five in the fourth, and so on. Therefore the number of different ways must be 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 40,320" (that is, 8!, where "!" denotes the factorial).[4] The same result can be obtained in a slightly different way. Let us endow each rook with a positional number, corresponding to the number of its rank, and assign it a name that corresponds to the name of its file. Thus, rook a1 has position 1 and name "a", rook b2 has position 2 and name "b", etc. Then let us order the rooks into an ordered list (sequence) by their positions. The diagram in Fig. 1 then transforms into the sequence (a,b,c,d,e,f,g,h). Placing any rook on another file would involve moving the rook that hitherto occupied that file to the file vacated by the first rook. For instance, if rook a1 is moved to the "b" file, rook b2 must be moved to the "a" file, and they now become rook b1 and rook a2. The new sequence becomes (b,a,c,d,e,f,g,h). In combinatorics, this operation is termed a permutation, and the sequences obtained as a result are permutations of the given sequence. The total number of permutations of a sequence of 8 elements is 8! (the factorial of 8). To assess the effect of the imposed limitation "rooks must not attack each other", let us consider the problem without such limitation.
In how many ways can eight rooks be placed on an 8 × 8 chessboard? This will be the total number of combinations of 8 rooks on 64 squares: ${\displaystyle {64 \choose 8}={\frac {64!}{8!(64-8)!}}=4,426,165,368.}$ Thus, the limitation "rooks must not attack each other" reduces the total number of allowable positions from 4,426,165,368 combinations to 8! = 40,320 permutations, a reduction by a factor of about 109,776. A number of problems from different spheres of human activity can be reduced to the rook problem by giving them a "rook formulation". As an example: a company must employ n workers on n different jobs, and each job must be carried out by only one worker. In how many ways can this appointment be done? Let us put the workers on the ranks of the n × n chessboard, and the jobs on the files. If worker i is appointed to job j, a rook is placed on the square where rank i crosses file j. Since each job is carried out by only one worker and each worker is appointed to only one job, all files and ranks will contain only one rook as a result of the arrangement of n rooks on the board; that is, the rooks do not attack each other.

### The rook polynomial as a generalization of the rooks problem

The classical rooks problem immediately gives the value of r8, the coefficient of the highest-order term of the rook polynomial. Indeed, its result is that 8 non-attacking rooks can be arranged on an 8 × 8 chessboard in r8 = 8! = 40,320 ways. Let us generalize this problem by considering an m × n board, that is, a board with m ranks (rows) and n files (columns). The problem becomes: in how many ways can one arrange k rooks on an m × n board in such a way that they do not attack each other? It is clear that for the problem to be solvable, k must be less than or equal to the smaller of the numbers m and n; otherwise one cannot avoid placing a pair of rooks on the same rank or file. Let this condition be fulfilled. Then the arrangement of rooks can be carried out in two steps.
First, choose the set of k ranks on which to place the rooks. Since the number of ranks is m, of which k must be chosen, this choice can be done in ${\displaystyle {\binom {m}{k}}}$ ways. Similarly, the set of k files on which to place the rooks can be chosen in ${\displaystyle {\binom {n}{k}}}$ ways. Because the choice of files does not depend on the choice of ranks, according to the product rule there are ${\displaystyle {\binom {m}{k}}{\binom {n}{k}}}$ ways to choose the set of ranks and files on which to place the rooks. However, the task is not yet finished, because the k ranks and k files intersect in k² squares. By deleting unused ranks and files and compacting the remaining ranks and files together, one obtains a new board of k ranks and k files. It was already shown that on such a board k rooks can be arranged in k! ways (so that they do not attack each other). Therefore, the total number of possible non-attacking rook arrangements is:[5] ${\displaystyle r_{k}={\binom {m}{k}}{\binom {n}{k}}k!={\frac {n!m!}{k!(n-k)!(m-k)!}}.}$ For instance, 3 rooks can be placed on a conventional chessboard (8 × 8) in ${\displaystyle \textstyle {\frac {8!8!}{3!5!5!}}=18,816}$ ways. For k = m = n, the above formula gives rk = n!, which corresponds to the result obtained for the classical rooks problem. The rook polynomial with explicit coefficients is now: ${\displaystyle R_{m,n}(x)=\sum _{k=0}^{\min(m,n)}{\binom {m}{k}}{\binom {n}{k}}k!x^{k}=\sum _{k=0}^{\min(m,n)}{\frac {n!m!}{k!(n-k)!(m-k)!}}x^{k}.}$ If the limitation "rooks must not attack each other" is removed, one must choose any k squares from the m × n squares. This can be done in: ${\displaystyle {\binom {mn}{k}}={\frac {(mn)!}{k!(mn-k)!}}}$ ways. If the k rooks differ in some way from each other, e.g., they are labelled or numbered, all the results obtained so far must be multiplied by k!, the number of permutations of k rooks.
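The two-step argument above can be sanity-checked against a direct count. The Python sketch below is illustrative only; `rk_formula` and `rk_brute` are our own helper names, not from the article.

```python
from itertools import combinations
from math import comb, factorial

def rk_formula(m, n, k):
    """Closed form: choose k ranks, choose k files, then permute rooks among them."""
    return comb(m, k) * comb(n, k) * factorial(k)

def rk_brute(m, n, k):
    """Count k-subsets of the m x n board whose rows and columns are all distinct."""
    cells = [(r, c) for r in range(m) for c in range(n)]
    return sum(
        1
        for p in combinations(cells, k)
        if len({r for r, _ in p}) == k and len({c for _, c in p}) == k
    )

print(rk_formula(8, 8, 3))               # 18816, the 3-rook example above
print(rk_formula(8, 8, 8) == factorial(8))  # True: the classical rooks problem

# The closed form agrees with brute force on small boards, including m != n.
for m, n in [(3, 4), (4, 4)]:
    for k in range(min(m, n) + 1):
        assert rk_formula(m, n, k) == rk_brute(m, n, k)
```

For larger boards the closed form is, of course, the only practical route, since the brute-force count enumerates all k-subsets of cells.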
### Symmetric arrangements

As a further complication to the rooks problem, let us require that rooks not only be non-attacking but also symmetrically arranged on the board. Depending on the type of symmetry, this is equivalent to rotating or reflecting the board. Symmetric arrangements lead to many problems, depending on the symmetry condition.[6][7][8][9]

[Chessboard diagram: Fig. 2]

The simplest of those arrangements is when rooks are symmetric about the centre of the board. Let us designate by Gn the number of such arrangements in which n non-attacking rooks are placed on a board with n ranks and n files. Now let the board contain 2n ranks and 2n files. A rook on the first file can be placed on any of the 2n squares of that file. According to the symmetry condition, the placement of this rook determines the placement of the rook that stands on the last file − it must be arranged symmetrically to the first rook about the board centre. Let us remove the first and the last files and the ranks that are occupied by these rooks (since the number of ranks is even, the two removed rooks cannot stand on the same rank). This gives a board of 2n − 2 files and 2n − 2 ranks. It is clear that to each symmetric arrangement of rooks on the new board corresponds a symmetric arrangement of rooks on the original board. Therefore, G2n = 2n·G2n−2 (the factor 2n in this expression comes from the possibility for the first rook to occupy any of the 2n squares on the first file). By iterating the above formula one reaches the case of a 2 × 2 board, on which there are 2 symmetric arrangements (on the diagonals). As a result of this iteration, the final expression is G2n = 2^n n! For the usual chessboard (8 × 8), G8 = 2^4 × 4! = 16 × 24 = 384 centrally symmetric arrangements of 8 rooks. One such arrangement is shown in Fig. 2. For odd-sized boards (containing 2n + 1 ranks and 2n + 1 files) there is always a square that does not have its symmetric double − this is the central square of the board.
There must always be a rook placed on this square. Removing the central file and rank, one obtains a symmetric arrangement of 2n rooks on a 2n × 2n board. Therefore, for such a board, once again G2n+1 = G2n = 2^n n! A little more complicated problem is to find the number of non-attacking arrangements that do not change upon a 90° rotation of the board. Let the board have 4n files and 4n ranks, and let the number of rooks also be 4n. In this case, the rook that is on the first file can occupy any square on this file except the corner squares (a rook cannot be on a corner square because after a 90° rotation there would be 2 rooks that attack each other). There are another 3 rooks that correspond to that rook, and they stand, respectively, on the last rank, the last file, and the first rank (they are obtained from the first rook by 90°, 180°, and 270° rotations). Removing the files and ranks of those rooks, one obtains the rook arrangements for a (4n − 4) × (4n − 4) board with the required symmetry. Thus, the following recurrence relation is obtained: R4n = (4n − 2)R4n−4, where Rn is the number of such arrangements for an n × n board. Iterating, it follows that R4n = 2^n(2n − 1)(2n − 3)...1. The number of arrangements for a (4n + 1) × (4n + 1) board is the same as that for a 4n × 4n board; this is because on a (4n + 1) × (4n + 1) board one rook must necessarily stand in the centre, and thus the central rank and file can be removed. Therefore R4n+1 = R4n. For the traditional chessboard (n = 2), R8 = 4 × 3 × 1 = 12 possible arrangements with rotational symmetry. For (4n + 2) × (4n + 2) and (4n + 3) × (4n + 3) boards, the number of solutions is zero. Two cases are possible for each rook: either it stands in the centre or it does not. In the second case, this rook is included in a rook quartet that exchanges squares on turning the board by 90°. Therefore, the total number of rooks must be either 4n (when there is no central square on the board) or 4n + 1.
This proves that R4n+2 = R4n+3 = 0. The number of arrangements of n non-attacking rooks symmetric about one of the diagonals (for determinacy, the diagonal corresponding to a1–h8 on the chessboard) on an n × n board is given by the telephone numbers, defined by the recurrence Qn = Qn−1 + (n − 1)Qn−2. This recurrence is derived in the following way. Note that the rook on the first file either stands on the bottom corner square or it stands on another square. In the first case, removal of the first file and the first rank leads to a symmetric arrangement of n − 1 rooks on an (n − 1) × (n − 1) board. The number of such arrangements is Qn−1. In the second case, for the original rook there is another rook, symmetric to the first one about the chosen diagonal. Removing the files and ranks of those rooks leads to a symmetric arrangement of n − 2 rooks on an (n − 2) × (n − 2) board. Since the number of such arrangements is Qn−2 and the rook can be put on any of the n − 1 non-corner squares of the first file, there are (n − 1)Qn−2 ways of doing this, which immediately gives the above recurrence. The number of diagonal-symmetric arrangements is then given by the expression: ${\displaystyle Q_{n}=1+{\binom {n}{2}}+{\frac {1}{1\times 2}}{\binom {n}{2}}{\binom {n-2}{2}}+{\frac {1}{1\times 2\times 3}}{\binom {n}{2}}{\binom {n-2}{2}}{\binom {n-4}{2}}+\cdots .}$ This expression is derived by partitioning all rook arrangements into classes; in class s are those arrangements in which s pairs of rooks do not stand on the diagonal. In exactly the same way, it can be shown that the number of n-rook arrangements on an n × n board, such that they do not attack each other and are symmetric about both diagonals, is given by the recurrence equations B2n = 2B2n−2 + (2n − 2)B2n−4 and B2n+1 = B2n.

### Arrangements counted by symmetry classes

A different type of generalization is that in which rook arrangements that are obtained from each other by symmetries of the board are counted as one.
For instance, if rotating the board by 90 degrees is allowed as a symmetry, then any arrangement obtained by a rotation of 90, 180, or 270 degrees is considered to be "the same" as the original pattern, even though these arrangements are counted separately in the original problem where the board is fixed. For such problems, Dudeney[10] observes: "How many ways there are if mere reversals and reflections are not counted as different has not yet been determined; it is a difficult problem."

## References

1. John Riordan, An Introduction to Combinatorial Analysis, Princeton University Press, 1980 (originally published by John Wiley and Sons, New York; Chapman and Hall, London, 1958) ISBN 978-0-691-02365-6 (reprinted again in 2002 by Dover Publications). See chapters 7 & 8.
2. Ole J. Heilmann and Elliott H. Lieb, Theory of monomer-dimer systems. Communications in Mathematical Physics, Vol. 25 (1972), pp. 190–232.
3. Dudeney, Henry E. Amusements in Mathematics. 1917. Nelson. (Republished by Plain Label Books: ISBN 1-60303-152-9; also as a collection of newspaper clippings, Dover Publications, 1958; Kessinger Publishing, 2006.) The book can be freely downloaded from the Project Gutenberg site.
4. Dudeney, Problem 295.
5. Vilenkin, Naum Ya. Combinatorics (Kombinatorika). 1969. Nauka Publishers, Moscow (in Russian).
6. Vilenkin, Naum Ya. Popular Combinatorics (Populyarnaya kombinatorika). 1975. Nauka Publishers, Moscow (in Russian).
7. Gik, Evgeny Ya. Mathematics on the Chessboard (Matematika na shakhmatnoy doske). 1976. Nauka Publishers, Moscow (in Russian).
8. Gik, Evgeny Ya. Chess and Mathematics (Shakhmaty i matematika). 1983. Nauka Publishers, Moscow (in Russian). ISBN 3-87144-987-3.
9. Kokhas', Konstantin P. Rook Numbers and Polynomials (Ladeynye chisla i mnogochleny). MCNMO, Moscow, 2003 (in Russian). ISBN 5-94057-114-X.
10. Dudeney, Answer to Problem 295.
# Random matrix approach to the dynamics of stock inventory variations

Listed:
• W. -X. Zhou (ECUST)
• G. -H. Mu (ECUST)
• J. Kert'esz (BME)

## Abstract

We study the cross-correlation matrix $C_{ij}$ of inventory variations of the most active individual and institutional investors in an emerging market to understand the dynamics of inventory variations. We find that the distribution of the cross-correlation coefficients $C_{ij}$ has a power-law form in the bulk followed by exponential tails, and that there are more positive coefficients than negative ones. In addition, two individuals or two institutions are more likely to have a strong inventory-variation correlation than an individual and an institution. We find that the largest and the second largest eigenvalues ($\lambda_1$ and $\lambda_2$) of the correlation matrix cannot be explained by random matrix theory, and that the projection of inventory variations onto the first eigenvector $u(\lambda_1)$ is linearly correlated with stock returns, with individual investors playing a dominating role. The investors are classified into three categories based on the cross-correlation coefficients $C_{VR}$ between inventory variations and stock returns. Half of the individuals are reversing investors, who exhibit evident buy and sell herding behaviors, while 6% of the individuals are trending investors. For institutions, only 10% and 8% of the investors are trending and reversing investors, respectively. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small proportion hold the trending strategy. Compared with the case of the Spanish market, Chinese investors exhibit both common and market-specific behaviors. Our empirical findings have scientific significance for the understanding of investors' trading behaviors and for the construction of agent-based models for stock markets.

## Suggested Citation

• W. -X.
Zhou & G. -H. Mu & J. Kert'esz, 2012. "Random matrix approach to the dynamics of stock inventory variations," Papers 1201.0433, arXiv.org.
• Handle: RePEc:arx:papers:1201.0433
• File URL: http://arxiv.org/pdf/1201.0433
Camphor Formula (C10H16O): Properties, Extraction, Side Effects

• Written By Ankita Sahay

# Camphor Formula: Structure, Properties, Uses

Camphor Formula: Camphor, an organic compound obtained from the camphor tree (Cinnamomum camphora), is widely used for its medicinal properties. It is an aromatic, volatile ketone that belongs to the 'Terpene' group of organic compounds. The molecular formula of camphor is $${{\rm{C}}_{10}}{{\rm{H}}_{16}}{\rm{O}}$$. Its IUPAC name is $${\rm{1,}}\,{\rm{7,}}\,{\rm{7}}$$-Trimethylbicyclo $${\rm{[2}}{\rm{.2}}{\rm{.1]}}$$ heptan-$$2$$-one. Camphor is obtained in nature from the bark and leaves of the tree Cinnamomum camphora, native to China, Japan, Korea, Vietnam, and Taiwan. Camphor oil is extracted and processed through steam distillation. Camphor is also produced synthetically by a sequence of chemical reactions. Because of camphor's anti-bacterial, anti-fungal, and anti-inflammatory properties, it relieves pain and treats skin ailments. In this article, we will learn about the chemistry of camphor and its importance in our daily life.

## Camphor Formula and Structure

Camphor is a cyclic monoterpene ketone built on 'bornane', a fundamental terpenoid skeleton. The chemical formula of camphor is $${{\rm{C}}_{10}}{{\rm{H}}_{16}}{\rm{O}}$$. Its IUPAC name is $${\rm{1,}}\,{\rm{7,}}\,{\rm{7}}$$-Trimethylbicyclo $${\rm{[2}}{\rm{.2}}{\rm{.1]}}$$ heptan-$$2$$-one. Other names of camphor are $$2$$-Camphanone; $$2$$-Dehydrocamphor; $$2$$-Keto-$${\rm{1,}}\,{\rm{7,}}\,{\rm{7}}$$-trimethylnorcamphane; $${\rm{d}}/{\rm{l}} – 2$$-Bornanone. It is an optically active compound, existing as an enantiomeric mixture of its $${\rm{‘d’}}$$ (dextro-rotatory) and $${\rm{‘l’}}$$ (laevo-rotatory) forms. Camphor was one of the first plant metabolites isolated from the wood of camphor trees, in the $${18^{{\rm{th}}}}$$ century.
Later, many synthetic methods were also introduced to prepare camphor. It is a volatile compound and thus can be purified by sublimation. It is a highly flammable, waxy compound.

### Extraction of Camphor Oil

Camphor is a waxy, crystalline solid with a strong aromatic scent. It is also used as an oil due to its aroma. Camphor oil is extracted by steam distillation from the woodchips, roots, and branches of the camphor tree (Cinnamomum camphora). After distillation, the collected extract is rectified under vacuum and filter-pressed.

### Uses of Camphor

Due to its sublimating property and aromatic fragrance, camphor is widely used in different fields. Let us discuss its uses one by one:

#### 1. Uses of Camphor as Medicines

Camphor has many medicinal values:

1. Anti-Inflammatory and Analgesic: It has anti-inflammatory properties and is a core ingredient in balms, vapour rubs, etc. It is used to relieve irritation, itching, and pain, to reduce inflammatory conditions, and to ease chest congestion. When applied in the form of an ointment, it is absorbed through the skin epidermis, producing a cool or warm sensation; there it stimulates the nerve endings, which induces slight local analgesia and gives relief from pain.
2. Camphor is often used as an aerosol. We use vapour rub, typically by steam inhalation or nebuliser treatment, to ease coughing and to relieve upper respiratory tract congestion caused by the common cold or bronchitis.
3. Camphor is also used as a respiratory stimulant for horses to treat breathing difficulties.
4. Over the centuries, camphor has been used in traditional medicine mainly as a decongestant and to treat swellings, inflammation, and sprains.

#### 2. As an Anti-microbial Agent

Camphor is used to fight against many pathogenic microorganisms. It has antiviral and antibacterial properties that help treat the diseases they cause.
Due to its antimicrobial activity, camphor oil was also one of the main ingredients used by the early Egyptians in preserving dead human bodies for mummification.

#### 3. As an Insecticide

1. Camphor is one of the best natural mosquito repellents, used in the form of incense sticks or smoke from crystals, driving mosquitoes away by its odour and toxicity. It is also used as a cockroach repellent.
2. Camphor is also kept with clothes stored for a long time to prevent the growth of tiny insects.
3. Camphor oil is used as a fumigant against red fire ants because it affects their climbing, attacking, and feeding behaviour.

#### 4. Use in the Prevention of Rust

Solid camphor is kept in tool chests to protect against rust, as it releases fumes that form a protective covering; as a result, it prevents exposure of the metal to air and moisture and so minimises the risk of rusting.

#### 5. Use in Perfume

1. As camphor has an aromatic fragrance, it was widely used as a perfume in the ancient era, mainly in Arabic and Chinese perfumery.
2. Camphor oil is also used in scented candles. When these candles are lit, the fumes that come from them diffuse all over the place and spread their fragrance.

#### 6. Camphor as Plasticiser

A plasticiser is a substance added to a synthetic resin to make plastic more flexible and reduce its brittleness. Camphor was the first plasticiser, used in cellulose nitrate, nitrocellulose lacquers, and other lacquers and plastics.

### Side Effects of Camphor

1. If camphor is taken in high doses, it produces symptoms of disorientation, lethargy, muscle spasms, irritability, abdominal cramps, vomiting, and convulsions. In adults, two grams of camphor cause serious toxicity, whereas four grams can be lethal.
2. We are often advised not to heat balms, Vicks, etc. These products contain camphor, and on heating, they may catch fire or explode and cause accidents.
3.
If inhaled in large quantities, camphor vapour products may be toxic, causing dizziness, vomiting, irritation, etc.
4. If undiluted camphor products are applied to the skin, they may cause skin redness and a burning sensation.

#### Summary

Camphor is a cyclic monoterpene ketone that belongs to the terpenoids. The chemical formula of camphor is $${{\rm{C}}_{10}}{{\rm{H}}_{16}}{\rm{O}}$$. Its IUPAC name is $${\rm{1,}}\,{\rm{7,}}\,{\rm{7}}$$-Trimethylbicyclo $${\rm{[2}}{\rm{.2}}{\rm{.1]}}$$ heptan-$$2$$-one, and it is also known as $$2$$-Camphanone; $$2$$-Dehydrocamphor; $$2$$-Keto-$${\rm{1,}}\,{\rm{7,}}\,{\rm{7}}$$-trimethylnorcamphane; $${\rm{DL}} – 2$$-Bornanone. It is an optically active compound, existing as a mixture of its $$‘{\rm{D}}’$$ and $$‘{\rm{L}}’$$ enantiomers. Camphor is extracted from the bark and leaves of a tree, Cinnamomum camphora, primarily found in China, Taiwan, Korea, etc. Extraction of camphor involves distillation and sublimation. Since ancient times, camphor has been used for its medicinal values and fragrance. It exists as white, translucent crystals and is highly flammable. Camphor has a wide range of applications, including medicinal uses as an anti-inflammatory and analgesic in balms to relieve pain and swelling. It is a natural insect repellent, and it is used in perfumery because of its aroma. Camphor is thus an important natural organic compound.

### FAQs on Camphor Formula

Let us look at some commonly asked questions about the camphor formula:

Q.1. Is camphor basic or acidic?
Ans: Camphor itself is a neutral ketone; on oxidation, it gives an acid known as camphoric acid.

Q.2. Is camphor absorbed by the body?
Ans: Yes, camphor is absorbed by the body. It is rapidly absorbed from the skin and gastrointestinal tract.
If applied in large quantities, toxic effects can occur within minutes of exposure and may cause abdominal distress, excitement, and dizziness, followed by CNS depression and other serious effects.

Q.3. Can you drink water after drinking camphor?
Ans: Yes, we can drink water after consuming camphor, as camphor is very volatile and can vaporise. It forms a layer upon water and also gives a cooling effect in case of acidity.

Q.4. Does camphor help in the cold?
Ans: Camphor is often used as an aerosol. For example, we use Vicks vapour rub, typically by steam inhalation or nebuliser treatment, to ease coughing and to relieve upper respiratory tract congestion caused by the common cold or bronchitis.

Q.5. How bad is camphor for you?
Ans: When taken in large quantities, camphor is toxic: it produces symptoms of disorientation, lethargy, muscle spasms, irritability, abdominal cramps, vomiting, and convulsions. In adults, two grams of camphor cause serious toxicity, whereas four grams can be lethal.
Sunday February 19, 2017 math Grade 5 will plant 1/3 of the whole school garden. So far, they have planted ¼ of the whole school garden. What fraction of the school garden still needs to be planted by grade 5? Tuesday, May 3, 2016 by Drequan Math The following data set is the GPAs of the students in a statistics class. 1.93, 1.99, 2.00, 2.04, 2.12, 2.34, 2.55, 2.55, 2.75, 2.75, 2.80, 2.80, 2.85, 3.02, 3.12, 3.22, 3.31, 3.33, 3.45, 3.69 What percentile is a GPA of 2.34? A. About the 6th B. About the 15th C. About the ... Tuesday, May 3, 2016 by angela math In a GP, the sum of the 3rd and 4th terms is -4 /3, and the sum of the 4th and 5th terms is -4 /9. Find the 6th term Sunday, April 24, 2016 by Anonymous maths in a GP, the sum of the 3rd and 4th terms is -4/3 and the sum of the 4th and 5th terms is -4/9. Find the 6th term. Sunday, April 24, 2016 by glory sequences and Series If 5 times the 5th term is equal to the 6 times the 6th term of an arithmetic sequence then its 11th term is....... Thursday, April 21, 2016 by Wawen 1=300(0.8)^t Monday, April 18, 2016 by Yineya How do we know which plate has an excess of electrons? Tuesday, April 12, 2016 by angy Math There are 55 student in eight grade middle school that play different sports. 27 of them play soccer, 15 of them play basketball. 11 of them play soccer or basketball. What is the probability given that the they are in eight grade and play sports that they play both soccer and... Thursday, March 31, 2016 by Korbin Math Do you have any 3rd grade math practice worksheets that I can use with my students? One of my students has a predicament of not doing his homework. Please implement your time and implementing a few practice worksheets for my students. Sincerely, Mrs. Raja 3rd grade teacher Tuesday, March 29, 2016 by Mrs. Raja MATHS GP IN A GP if the sum of 2nd and 4th terms is 30 . The difference of 6th and 2nd term is 90 . Find the 8 the term I got r = √13/2 Is it correct? 
Pls answer Wednesday, March 23, 2016 by Venkatesh Science What if a student gets a 65 on 1 test and 100% on 4 tests? What will there grade be? Wednesday, March 23, 2016 by Anonymous math help pls pls pls Choose the correct simplification of the expression the quantity a to the 2nd power over b to the 3rd power all raised to the 4th power. A. a to the 6th power over b to the 7th power B. b^12 C. a^8b^15 D. the quantity a to the 8th power over b to the 12th power pls help me Monday, March 21, 2016 by Oscar Programming C++ using Data Structure. In an academic institution, Student has its records. Each student has his/her profile such as id number, student name. The institution also keeps records of student grades in each subject. Student grades contains information subject description, school year, semester, and ... Thursday, March 17, 2016 by Vonn math a student’s grade in a class is simply the mean of five 100-point exams a. If the student has grades of 77, 73, 97, and 89 on the first four exams, what is the students’ grade before taking the last exam? b. What is the lowest score that the student can earn on the last ... Monday, March 14, 2016 by Debbie 7+5=8+ What number makes this equation true Friday, March 11, 2016 by Milan Each pail of plaster covers 90 square feet of ceiling. What is the least number of pails of plaster you would need to buy to cover the ceiling of a room with walls 14 feet long? 5 pails 6 pails***** 4 pails 3 pails Tuesday, March 8, 2016 by dasha Calculate and compensate 823-273 Sunday, March 6, 2016 by Kgahli World history 6th 2 more questions The Ayurveda is a Hindu book of knowledge that A. provides instructions for large sale agriculture B. describes the origins of Hindu gods and goddesses C. Recounts the historical migrations of the Aryans D. 
Contains ancient Hindu knowledge of medicine Wednesday, March 2, 2016 by dasha Geometry In the diagram to the left, ∠ABC and ∠DCB are right angles. Which of the following is closest to the length of DE? I am ... Tuesday, March 1, 2016 by Tyteana Correct punctuation of , That is in my opinion depressing. Friday, February 26, 2016 by Debbie Statistics The mean grade in this class last semester was 78.3, and the variance was 49. The distribution of grades was unimodal and symmetrical. Using this information determine the probability that "Joe", a random student you know nothing about, other than the fact that they are ... Thursday, February 25, 2016 by Francisco Algebra A math class has 7 girls and 5 boys in the seventh grade and 4 girls and 4 boys in the eighth grade. The teacher randomly selects a seventh grader and an eighth grader from the class for a competition. What is the probability that the students she selects are both boys? Write ... Monday, February 22, 2016 by Anesha Math A science class has 3 girls and 3 boys in the seventh grade and 4 girls and 1 boy in the eighth grade. The teacher randomly selects a seventh grader and an eighth grader from the class for a competition. What is the probability that the students she selects are both boys? Sunday, February 14, 2016 by Brooke MS.SUE! this weekend can you help me with the second part of my grade recovery Thursday, February 11, 2016 by ashleydawolfy Algebra 1 -2, -12, -72, -432 find the 8th term 0.2, 1.6, 12.8, 102.4 find the 6th term 24, 12, 6, 3 find the 7th term Thursday, February 4, 2016 by Lina News for Ms. Sue 1. A student raises her grade average from a 75 to a 90. What was the percent of increase in the student's grade average? Round your answer to the nearest tenth of a percent, if necessary. A: 20% B: 16.7% C: 8.3% D: 15% Hi Ms. Sue, it was 20%. Thank you for your help, though...
Tuesday, February 2, 2016 by Lizzie math if the lecture grade is the mastering points worth 20% and exams worth 80%. what is the lecture grade? mastering points: 1513 exam grades: 92,83,76,95,95 thank you! Sunday, January 24, 2016 by lauren Is 1 prime or composite Monday, January 18, 2016 by E which course can I take after my grade 12? Wednesday, January 13, 2016 by MODISE BOITUMELO Modernist Poetry I want to write a modernist poem about struggles, future, stress. How everything I do in grade 12 will affect my future. The amount of pressure that is put onto me by my parents to get into university. I need a lot of help. I've always struggled with poems. I really need help... Tuesday, January 12, 2016 by Poemhelp-Please math for a game, three people are chosen in the first round. Each of those people chooses 3 people in the second round and so on. how many people are chosen in the 6th round. Monday, January 11, 2016 by byron Can you help me solve: 5sin2x + 3sinx - 1 = 0 Thanks Sunday, January 10, 2016 by Carol algebra A student's grade in a course is the average of 4 test grades and a final exam that is worth three times as much as each test. Suppose a student has test grades of 90, 88, 81, and 94. Write an equation to model this situation where x is the student's grade on the final exam ... Friday, January 8, 2016 by kayla Melissa needs to subtract 10 5/8 from 12 6/8, what is her answer? Tuesday, January 5, 2016 by Anonymous Solve the equation. 1. 4-5 __ = 1 I cannot figure this out. 3 Monday, December 21, 2015 by Help needed The fifth-grade class at John Adams Middle School earned $428.81 at their school carnival. The money is being divided between 23 students to help pay for their class trip. How much money will each student receive? Round to the nearest hundredth. is it $18.60? Wednesday, December 16, 2015 by patricia electrical wire: $1.49 for 3 ft or $.69 for 18 in. Which is a better buy?
Tuesday, December 15, 2015 by Emmy Math I'm in sixth grade and I need help on 3(12w+8)+17+9w Tuesday, December 8, 2015 by bob Math Fifteen Grade 2 children rode bicycle to school. Seventeen Grade 3 children rode bicycle to school. How many children in Grades 2 and 3 rode bicycles to school? Thursday, December 3, 2015 by Alex Maths The sixth term of GP is 16 and the third term is 2. Find the first term and common ratio. 6th term =16 3rd term=2 So ar^5=16 and ar^2=2, then divide. So r^3=8, and r^3 =2^3. Therefore r =2 and a=1/2 Thursday, December 3, 2015 by Vin Mwale Medical Billing and coding Can I retake a Quizz to help raise my grade? Thursday, November 26, 2015 by Melodee Ravi resigned----------illness Monday, November 16, 2015 by Anonymous 5.85 g of NaCl is dissolved in 250 ml of solution. Calculate the mass % of solute? Sunday, November 8, 2015 by Minibel Math Am stuck A fifth grade class of 32 students is going on a field trip. They want to rent vans to drive to the field trip. Each van seats a maximum of 7 students. What is the least number of vans the fifth grade class must rent so that each student has a seat? If I times it that... Thursday, November 5, 2015 by iyana The sum is 1.7 and the product is 6.3. What are the numbers? Tuesday, November 3, 2015 by Kevin mathematics the Fourth term of an a.p is 37 and the 6th term is 12 more than the Fourth term. Find the first and seventh terms Tuesday, November 3, 2015 by arithmetic progression programming ( qbasic) i was given this program by my teacher. it converts decimal number to binary number.. Why is that FIX kept in front of (n/2) ...6th line of the program. why is that FIX written and why? i didn't get it CLS INPUT “ENTER A NUMBER”; N WHILE N <> 0 A = N MOD 2 B$= STR$(... Tuesday, November 3, 2015 by mark 32/4=b+4/3 Monday, November 2, 2015 by Cole Grammar 2. Brianna eats chocolate whenever she gets a poor grade in math. whenever she gets a poor grade in math is underlined. I believe this is a dependent clause. 3.
After the house flooded, the family moved into a temporary shelter. After the house is flooded is underlined. I ... Thursday, October 15, 2015 by Patrick Where is the hundred thousandths place in 46.15398? Thursday, October 15, 2015 by Grace (The Puzzled Penguin) Computer/Math In an examination five pass grade and one fail is awarded a. For mark between 70 and 100 b. For mark between 60 and 69 c. For mark between 50 and 59 d. For mark between 45 and 49 e. For mark between 40 and 44. Write a program to input an mark for a student and print the grade... Tuesday, September 22, 2015 by Tosin explain how to compare 8,563 and 8,699 Thursday, September 10, 2015 by debbie -0.5167 is this number rational or irrational ? Thank you Wednesday, September 9, 2015 by Holly Algebra Students get two grades for a test in an introductory psychology course: a letter grade, and a numerical equivalent (weighted value). So an A is worth 4 points, a B is worth 3 points, a C is worth 2 points, a D is worth 1 point, and an F is worth 0 points. The results for the ... Wednesday, September 9, 2015 by Camry What does 5*6 mean? Tuesday, September 8, 2015 by Donna social and environmental issues that cause ill - health Tuesday, September 1, 2015 by nokwanda I am in 8th grade math as a sixth grader. I am stuck on one problem. It is this: You are making a necklace that is 9 inches long. You use 6 beads for each inch. what integer is the change in your supply of beads after making the necklace? Monday, August 31, 2015 by Casey Math: Mean, Median, Mode, Range & Finding the devi What is deviation? And what is the definition of frequency distribution & how would I solve a problem in 8th grade math with this problem...My child is asking about this problem she has in an 8th grade book. Given the following numbers 2,2,3,6,1,9,4,2,5,7,6,8,6, What is the ... Friday, August 21, 2015 by Aayla What's elasticity? Monday, August 17, 2015 by Sm need help There are 205 students in the middle school. 
If 41 students are girls in 8th grade, what percent of the middle school do the girls in 8th grade represent? Tuesday, August 11, 2015 by Anonymous Monday, August 10, 2015 by Clarissa Math The distance s in feet that a body falls in t seconds is given by the formula s=16t^2. If a body has been falling for 5 seconds, how far will it fall during the 6th second Monday, August 10, 2015 by Anonymous what can be a word problem that can be represented by the equation 1) 1/4 + 3/4=? 2- 7/8=? Friday, July 31, 2015 by anonymous English Posted by rfvv on Monday, July 13, 2015 at 9:33am. Lesson Plan .... Standards Addressed: General Goal9s): Specific Objectives: ...... ===================== In a leson plan, what is 'Standards Addressed'?•English - Writeacher, Monday, July 13, 2015 at 9:42am Standards are ... Tuesday, July 14, 2015 by rfvv Math One half the members of the junior high math team are ninth graders. One fourth of the remaining members are eighth graders. One fifth of the rest are seventh graders. There are twelve 6th graders on the team. How many members on the team? Monday, July 13, 2015 by Tim precalculus If you are driving down a 10% grade, you will alway be 10% below where you would have been if you could have driven on flat ground from where you started. Suppose you descend to where you are 20 feet below your starting point. What concept does he "grade" relate to and how far... Saturday, June 6, 2015 by Lois What is he price of a hamburger and cheeseburger at this restaurant h+3c=$9.00 2h+c=$5.50 Wednesday, June 3, 2015 by Mythreyee 51 is 4 times the number of books, less 13 Wednesday, May 27, 2015 by Mythreyee 51 is 4 times of book is less 13 Wednesday, May 27, 2015 by Mythreyee The sum of 6 and the product of 7 and a number is 75 Wednesday, May 27, 2015 by Mythreyee Algebra Is factoring equations a basic 9th grade skill? 
Friday, May 22, 2015 by Peach what is a common factor Thursday, May 21, 2015 by seahawks math lilly made a 75 on her history test and an3 on her math test. the mean grade on the history test was 72 with a standard deviation of 5. the average grade on the math test was 81 with a standard deviation of 4. which test did lilly do better? Thursday, May 21, 2015 by debbie math I have a question on my math that my teacher cant help me with because this one is for try it. Can you help me see if it is right?: It says: 12 . Wich is equal to 30 hundreths? A)3 thousandths B)3tenths C)3 tens D)3thousands I think the answer is B but, it really confuses me ... Thursday, May 7, 2015 by nicole English WORDS: Averse Detract Disdain Divulge Elation Endow Expulsion Mortified Nullified Ominous ------------- Some people are so (1)averse to living near a nuclear plant that they want the plant’s license to be (2)nullify. They say the plant infringes on every homeowner’s right ... Friday, April 24, 2015 by Sora math the fourth terms of an AP is 37 and the 6th terms is 12 more than the fourth term.Find the first and seventh terms. Tuesday, April 21, 2015 by Dallas math Sarah was trying to figure out her grades for her math course. She only knew the grade of 4 tests she had taken, however the class had taken 5 tests total. She scored a 94, 67, 83 and 93. If Sarah ended up with a grade of 90, what did she get on the 5th and final test? Saturday, April 18, 2015 by Jay Algebra (check my work?) 1. Write a proportion using an unknown value x and the ratio 5:6 then solve it. 5/6 = x/12 6 * 2 = 12 5 * 2 = 10 x = 10 2. In an orchestra, the ratio of trombones to violas is 1 to 3. There are 9 violas, Write a proportion that gives the number t of trombones in the orchestra... Friday, April 17, 2015 by Ivy connexus Thursday, April 16, 2015 by Oscar algebra what is the first step when solving x/15=3/5? write a proportion using an unknown value x and the ratio 5:6 then solve it. 
in an orchestra, the ratio of trombones to violas is 1 to 3. a. there are 9 violas. write a proportion that gives the number t of trombones in the ... Wednesday, April 15, 2015 by phan why does little red riding hood go for a walk in the woods? Tuesday, April 14, 2015 by jessica (y2-3y) Monday, April 13, 2015 by Sw Math Jeremy uses the linear function G=12h+50 to represent the grade, G (in points out of 100), that he can earn on an exam as a function of h, the number of hours he spends studying for the exam. 1.) Identify the slope and y-intercept of Jeremy's function and explain what they ... Thursday, April 9, 2015 by Hailey self I have 10 letters. My 1st and 2nd letter means not. My 1st to 5th letter means below. My 6th to 10th letter is to stand. In all, I am when you know. What am I? Thursday, April 2, 2015 by john maths Paul wrote five tests which were marked out of a total of 200 marks. Paul scored 40 out of 50 in a 6th test. What was his NEW average? HoW do I work out this type of sum. Monday, March 30, 2015 by Anonymous Mathematics Paul wrote five tests which were marked out of 200 marks. His average was 60%. Paul scored 40 out of 50 in a 6th test. What was his NEW average? Monday, March 30, 2015 by Shawnaley History Even though I asked this before, can you please help me with this and yes it's a crossword puzzle. My teacher gave us clues that don't really help at all and I've looked back in my textbook but I can't find the words for each clue. 1.Equality of Sexes 7 letters. the 5th letter... Sunday, March 29, 2015 by Jamie Math Sarah was trying to figure out her grades for her math course. She only knew the grade of 4 of the tests she had taken, however the class had taken 5 tests total. She scored a 94, 67, 83, and 93. If Sarah ended up with a grade of 90, what did she get on the 5th and final test... Thursday, March 26, 2015 by Bella Maths lit Wednesday, March 25, 2015 by Nokulunga Math A ball is dropped vertically. 
It reaches a height of 1.6m on the first bounce. The height of each subsequent bounce is 80% of the previous bounce. A. Find the height the ball reaches on the 6th bounce. B. Find the sum of the first seven terms of this sequence. Thursday, March 19, 2015 by Sam what is the standard form of 213 Wednesday, March 18, 2015 by Anonymous math Aisha invests twice the amount invested the previous month. If she invested $26.25 during the 1st month, how much did she invest during the 6th month? Wednesday, March 18, 2015 by jill In the choir, 1/2 of the members are boys and 1/2 are girls. If 1/4 of the boys are 5th graders what fraction of the whole choir is 5th grade boys? Thursday, February 26, 2015 by ary Algebra 1 Earth has a radius of about 6.4 x 10 to the 6th power m. Approximate the surface area of earth using the formula for the surface area is a sphere, S = 4 pi r squared. Earth's surface is about 70 percent water. About how many square meters of earth's surface are covered with ... Thursday, February 19, 2015 by Lauren math the 6th term of A.P. is 5 times the fifth term and the 11th term exceeds twice the fifth term by three. Find the 8th term Thursday, February 19, 2015 by ap Alyssa's shelves hold 31 books each. How many shelves will Alyssa need if Alyssa has 248 books? Sunday, February 8, 2015 by haley math A student’s final grade in chemistry is determined by the following weights: Quizzes 5% Exam 1 20% Exam 2 20% Lab Reports 15% Research Paper 15% Final Exam 25% The student received the following grades: Lab reports: 75 80 70 75 85 69 90 75 Quizzes: 85 60 70 0 75 80 80 80 ... Friday, February 6, 2015 by Jayden Probability Alfred and Bob toss a coin alternately, and whoever gets a head first will win. They play many times and whoever loses in the previous play will toss first at the next play. If Eric tosses first at the first play, then what is the probability that Eric will win at the 6th play?
Friday, February 6, 2015 by James maths Joe eats 2/5ths of pizza, Jane eats 1/6th, how much more has Joe eaten than Jane? Tuesday, January 27, 2015 by debs
# Position probability in finite wells

The normalized states of a particle in a well are $\varphi_{even}(x)=\frac{1}{\sqrt{a+(1/q)}}\left\{\begin{matrix} e^{q a}\cos(k a)e^{q x} & \text{for } x < -a \\ \cos(k x) & \text{for }-a\leq x \leq a \\ e^{q a}\cos(k a)e^{-q x} & \text{for } x > a \end{matrix}\right.$ $\varphi_{odd}(x)=\frac{1}{\sqrt{a+(1/q)}}\left\{\begin{matrix} -e^{q a}\sin(k a)e^{q x} & \text{for } x < -a \\ \sin(k x) & \text{for }-a\leq x \leq a \\ e^{q a}\sin(k a)e^{-q x} & \text{for } x > a \end{matrix}\right.$ Find the probability that a particle in the ground state of a finite well is measured to have a position outside of the well. Put the expression in terms of: (1) $k$, $q$ and $a$ and (2) $z$ and $z_0$.

In the ground state the wave function is even (see the figure at the bottom), that is, symmetrical with respect to $x=0$: $\varphi_{even}(x)=\frac{1}{\sqrt{a+(1/q)}}\left\{\begin{matrix} e^{q a}\cos(k a)e^{q x} & \text{for } x < -a \\ \cos(k x) & \text{for }-a\leq x \leq a \\ e^{q a}\cos(k a)e^{-q x} & \text{for } x > a \end{matrix}\right.$

The probability to find the particle outside the well (in the walls) is $P=\int_{-\infty}^{-a} |\varphi|^2\, d x+\int_a^{+\infty}|\varphi|^2\, d x= \frac{e^{2qa}\cos^2(k a)}{a+(1/q)}\left[\int_{-\infty}^{-a} e^{2qx}\, d x+ \int_a^{+\infty} e^{-2qx}\, d x\right]$ Each of the two integrals equals $e^{-2qa}/(2q)$, so $P=\frac{e^{2qa}\cos^2(k a)}{a+(1/q)}\cdot\frac{e^{-2qa}}{q}=\frac{\cos^2(k a)}{q\,[a+(1/q)]}=\frac{\cos^2(k a)}{1+qa}$

If we make the notations $z_0=(\sqrt{2mV_0}/\hbar)\,a$ and $z=(\sqrt{2mE_1}/\hbar)\,a$ we have $k a=z$ inside the well, and $q a=\sqrt{z_0^2-z^2}$ outside the well, and the probability is $P=\frac{\cos^2 z}{1+\sqrt{z_0^2-z^2}}$
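As a numerical cross-check (not part of the original solution; the well half-width $a=1$ and strength $z_0=2$ are arbitrary choices made here), one can solve the even-state matching condition $z\tan z=\sqrt{z_0^2-z^2}$ for the ground state and integrate $|\varphi|^2$ directly:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

a, z0 = 1.0, 2.0  # arbitrary half-width and well-strength parameter

# Ground state: even solution of the matching condition z*tan(z) = sqrt(z0^2 - z^2)
z = brentq(lambda s: s*np.tan(s) - np.sqrt(z0**2 - s**2), 1e-9, np.pi/2 - 1e-9)
k, q = z/a, np.sqrt(z0**2 - z**2)/a

N = 1.0/np.sqrt(a + 1.0/q)  # normalization constant
def phi(x):
    if abs(x) <= a:
        return N*np.cos(k*x)
    return N*np.exp(q*a)*np.cos(k*a)*np.exp(-q*abs(x))

# Integrate |phi|^2 piecewise (a kink at x = ±a)
inside = quad(lambda x: phi(x)**2, -a, a)[0]
outside = 2*quad(lambda x: phi(x)**2, a, np.inf)[0]

# Closed-form result derived above
P_closed = np.cos(z)**2/(1 + np.sqrt(z0**2 - z**2))
```

The total norm `inside + outside` should come out to 1, and `outside` should agree with the closed form.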
# Cayley - Hamilton Theorem on higher powers 1. Nov 14, 2013 ### KMjuniormint5 1. The problem statement, all variables and given/known data Given the following matrix A = [3 -1; -1 3] Find C = (0.5*A - I)^100 2. Relevant equations Using the knowledge that, by the Cayley - Hamilton Theorem, a matrix must satisfy its own characteristic polynomial. 3. The attempt at a solution Here the characteristic polynomial is λ^2 - 6*λ + 8. When we plug in A for λ we can verify that A^2 - 6*A + 8*I = 0, which it does. If I solve by hand/brute force way, I get C = [0.5 -0.5; -0.5 0.5] regardless of what the exponent is (here the exponent is 100). As far as applying the Cayley-Hamilton theorem, I am not connecting the two. I understand how to solve for A^3 or A^4, √A, or exp(A) but not for this problem. Any help in the right direction will be greatly appreciated. 2. Nov 14, 2013 ### KMjuniormint5 Looking at it from a different angle: take w = 0.5*A - I and find its characteristic equation, which is λ^2 - λ = 0. We see that λ^2 = λ. Plug in w for λ. Now for w^3 we can do w^2*w, but w^2 is w, so that is w*w and that is w. So w^100 = w^99*w = ... = w*w = w. In the end we see that C = w = [0.5 -0.5; -0.5 0.5]. Was that the right approach? 3. Nov 14, 2013 ### Ray Vickson A much, much simpler way is to note that if the eigenvalues of A are distinct (which they are in this case) then for any polynomial function f(x) we have $$f(A) = E_1 f(\lambda_1) + E_2 f(\lambda_2),$$ with the matrices $E_1, E_2$ being the same for any function f. You can find E1 and E2 in this case by using, for example, $$f(x) = x^0 = 1 \Longrightarrow f(A) = I = E_1 \lambda_1^0 + E_2 \lambda_2^0 = E_1 + E_2$$ and $$f(x) = x \Longrightarrow f(A) = A = E_1 \lambda_1 + E_2 \lambda_2$$ Now it is easy to compute f(A) for $$f(x) = (0.5 x - 1)^{100}.$$
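Both approaches in the thread can be verified numerically; a quick sketch (not code from the thread) using the eigenvalues λ1 = 2, λ2 = 4 of A:

```python
import numpy as np

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
W = 0.5*A - np.eye(2)

# Brute force: W is idempotent (W^2 = W), so any positive power equals W
C_power = np.linalg.matrix_power(W, 100)

# Spectral method: f(A) = E1*f(l1) + E2*f(l2),
# with E1 + E2 = I and l1*E1 + l2*E2 = A solved for E1, E2
l1, l2 = 2.0, 4.0                      # eigenvalues of A
E1 = (l2*np.eye(2) - A)/(l2 - l1)
E2 = (A - l1*np.eye(2))/(l2 - l1)
f = lambda lam: (0.5*lam - 1.0)**100   # f(2) = 0, f(4) = 1
C_spectral = E1*f(l1) + E2*f(l2)
```

Both give C = [0.5 -0.5; -0.5 0.5], matching the hand calculation.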
# Mind the Croc! Rationality Gaps vis-à-vis the Crocodile Paradox

• Published in 2015
• In the collections: Attention-grabbing titles, Animals

This article discusses rationality gaps triggered by self-referential/cyclic choice, the latter being understood as choosing according to a norm that refers to the choosing itself. The Crocodile Paradox is reformulated and analyzed as a game—named CP—whose Nash equilibrium is shown to trigger a cyclic choice and to invite a rationality gap. It is shown that choosing the Nash equilibrium of CP conforms to the principles Wolfgang Spohn and Haim Gaifman introduced to, allegedly, guarantee acyclicity but, in fact, does not prevent self-referential/cyclic choice and rationality gaps. It is shown that CP is a counter-example to Gaifman's solution of the rationality gaps problem.

## Other information

issn: 0144-5340
journal: History and Philosophy of Logic
language: en
pages: 1--13
publisher: Taylor & Francis

### BibTeX entry

@article{Gerogiorgakis2015, abstract = {This article discusses rationality gaps triggered by self-referential/cyclic choice, the latter being understood as choosing according to a norm that refers to the choosing itself. The Crocodile Paradox is reformulated and analyzed as a game—named CP—whose Nash equilibrium is shown to trigger a cyclic choice and to invite a rationality gap. It is shown that choosing the Nash equilibrium of CP conforms to the principles Wolfgang Spohn and Haim Gaifman introduced to, allegedly, guarantee acyclicity but, in fact, does not prevent self-referential/cyclic choice and rationality gaps. It is shown that CP is a counter-example to Gaifman's solution of the rationality gaps problem.}, author = {Gerogiorgakis, Stamatios}, issn = {0144-5340}, journal = {History and Philosophy of Logic}, language = {en}, month = {jun}, pages = {1--13}, publisher = {Taylor {\&} Francis}, title = {Mind the Croc! 
Rationality Gaps vis-{\`a}-vis the Crocodile Paradox}, url = {http://www.tandfonline.com/doi/full/10.1080/01445340.2015.1046211{\#}.VegVv{\_}lViHg}, year = 2015, urldate = {2015-09-03}, collections = {Attention-grabbing titles,Animals} }
# What is the formula for the packing fraction in solid state? Arun 25757 Points 4 years ago • Packing fraction or packing efficiency is the percentage of total space filled by the particles. • Both hcp & ccp, though different in form, are equally efficient. They occupy the maximum possible space, which is about 74% of the available volume. Hence they are called closest packing. • In addition to the above two types of arrangements, a third type of arrangement found in metals is body centred cubic (bcc), in which the space occupied is about 68%. • Packing Efficiency = (volume occupied by the particles in the unit cell / total volume of the unit cell) × 100
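The quoted 74% and 68% follow directly from unit-cell geometry; a short numerical check (a sketch, not part of the original answer):

```python
from math import pi, sqrt

def packing_fraction(atoms_per_cell, radius_over_edge):
    """Fraction of a cubic unit cell of edge a filled by spherical
    atoms of radius r = radius_over_edge * a (cell volume taken as 1)."""
    return atoms_per_cell * (4.0/3.0) * pi * radius_over_edge**3

# The radius is fixed by where neighbouring spheres touch:
fcc = packing_fraction(4, sqrt(2)/4)  # ccp/fcc: touch along the face diagonal, 4r = a*sqrt(2)
bcc = packing_fraction(2, sqrt(3)/4)  # bcc: touch along the body diagonal, 4r = a*sqrt(3)

print(round(100*fcc, 1), round(100*bcc, 1))  # prints: 74.0 68.0
```

The closed forms are π√2/6 ≈ 74% for ccp/hcp and π√3/8 ≈ 68% for bcc, as stated above.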
Journal article 606 views

### A high statistics lattice calculation of heavy-light meson decay constants

F. Rapuano, G. Martinelli, M. Crisafulli, Leonardo Giusti, L. Conti, Chris Allton

Physics Letters B, Volume: 405, Issue: 1-2, Pages: 133 - 141

Swansea University Author: Chris Allton

Full text not available from this repository: check for access using links below.

DOI (Published version): 10.1016/S0370-2693(97)00580-7

Abstract: We present a high statistics study of the D- and B-meson decay constants. The results were obtained by using the Clover and Wilson lattice actions at two different values of the lattice spacing $a$, corresponding to $\beta=6.0$ and 6.2. After a careful analysis of the systematic errors present in the extraction of the physical results, by assuming quite conservative discretization errors, we find $f_{D_s}=237 \pm 16$ MeV, $f_{D} = 221 \pm 17$ MeV ($f_{D_s}/f_D=1.07(4)$), $f_{B_s} = 205 \pm 35$ MeV, $f_{B} = 180 \pm 32$ MeV ($f_{B_s}/f_B=1.14(8)$), in good agreement with previous estimates.

Published in: Physics Letters B (ISSN 0370-2693), 1997
Record: https://cronfa.swan.ac.uk/Record/cronfa28494
INSPIRE: http://inspirehep.net/record/440850
Affiliation: Faculty of Science and Engineering, School of Biosciences, Geography and Physics - Physics, Swansea University
#### THE_RB Joined Feb 11, 2008 5,438 It's common to need to scale a microcontroller ADC result to give a value in some other scale. One example is to scale the ADC to read in actual volts like read from 0.00v -> 5.00v. People sometimes use ADC *500 /1023 as this gives a reading of 500 (5.00v) for the top ADC value of 1023. Doing that math; *x/(n-1) or *x/1023 does perform a kind of rounding function, but it "fudges" the rounding and gives both scaling and rounding errors. The correct scaling math; *x/n or *x/1024 correctly scales all the output data in size, but gives an average rounding error of 0.5 ADC counts (always rounding down). To magnify and show the errors caused by *x/(n-1) this table shows a simple ADC that has 5 values. (This is just like a PIC ADC but has 5 possible ADC values, not 1024).

First the incorrect *x/(n-1) math;

input       ADC   math    result   average   output
4.00-4.99    4    *5 /4    5.00     +0.50      5
3.00-3.99    3    *5 /4    3.75     +0.25      3
2.00-2.99    2    *5 /4    2.50      0.00      2
1.00-1.99    1    *5 /4    1.25     -0.25      1
0.00-0.99    0    *5 /4    0.00     -0.50      0

Note that for the 5 possible ADC values, the output scale now has 6 values (0-5) and the value 4 can never occur! And although the average error might look to be nicely distributed +/- the centre of scale, the actual output result proves to be much uglier. So if you had a slowly increasing voltage, the ADC would read; 0, 1, 2, 3, 5!! The maximum error in real world use is 2v, because that very tiny change of 3.999v - 4.001v would cause an output change of 3v -> 5v or a change of 2v!

Here is the correct scaling math *x/n;

input       ADC   math    result   average   output
4.00-4.99    4    *5 /5    4.00     -0.50      4
3.00-3.99    3    *5 /5    3.00     -0.50      3
2.00-2.99    2    *5 /5    2.00     -0.50      2
1.00-1.99    1    *5 /5    1.00     -0.50      1
0.00-0.99    0    *5 /5    0.00     -0.50      0

All output values are properly scaled and all are represented.
The average error is never greater than 0.5 (no more average error than the /(n-1) example), and the average error is always one fixed value (-0.5), making it very easy to compensate (see below). The maximum error at any time is 1v; this is half the max error of the /(n-1) example, which can introduce extra error of up to 1v in high-value ADC readings.

Understanding the *x/(n-1) problem. The problem with /(n-1) is that it corrupts the SCALING of the data. Because it forces the top ADC unit value to be the top of the scale, the scale is no longer an accurate conversion of the data. This corruption means the true scale of 1024:5 is being represented as 1023:5, so the data on the output is now larger than life. However, if the correct scaling is used (ADC*5/1024), then any input data is correctly represented at the output. All data is rounded down to the nearest ADC unit, so all the output data is correctly scaled but will have an average error of -0.5 ADC units. This error is actually a property of the ADC module hardware AND the ratio scaling math. This is because the ADC hardware rounds all voltages down to the nearest ADC unit, and because the integer division (/1024) also rounds data down. (More on compensating for this later). ---continued--- #### THE_RB Joined Feb 11, 2008 5,438 Understanding ratios and ratio math! A ratio is one scale compared to another. The result can always be calculated as *x /y (or *y /x) and will always appear as a linear line on a graph. Forget ADCs for the moment and let's look at ratio scaling two other real world scales. For instance scaling RPM to Hz. The ratio is known; 1 Hz = 60 RPM, so the ratio is 60:1 and we can convert RPM to Hz by doing the math; RPM *1 /60 = Hz. This math will give a perfectly scaled conversion of RPM to Hz. Note that there is NO correct solution using /(n-1): *1 /59 will NOT properly convert RPM to Hz! With a PIC ADC module there are exactly 1024 ADC units (numbered 0-1023) for every 5v.
The ratio to calculate voltage from ADC units is 1024:5 and that ratio is performed with this math; *5 /1024. This ratio can be shown as a straight line on a graph. If the ADC happens to have 11 bits (has 2048 ADC units) and runs from 0-10v the ratio remains the same, as the relationship of ADC units to voltage is still 1024:5. Doing the incorrect math of ADC *5 /1023 gives ONE advantage: it means that the top ADC reading of 1023 will give an output result of "5v". The problem is that doing this kludge in code gives incorrect scaling of ALL the ADC data by adding more +error to the data as the values increase. In some cases you may be able to tolerate this error, if you don't mind the two problems:

Problem 1. Higher values are always rounded up, lower values are always rounded down.
Problem 2. The output data (waveforms etc) are shown to be larger than reality (scaling error).

Doing the ratio scaling math right! First let's clear one thing up: the ratio scaling math of *x /n is perfect, it has no error. But there are two rounding down errors that we need to deal with:

1. The ADC hardware itself causes a rounding down error. Every possible input voltage is rounded down to the nearest ADC unit. This error occurs BEFORE any ratio math is done, and can be compensated by adding 0.5 ADC counts to every ADC sample BEFORE any conversion. Since it is impossible to add 0.5 in integer math the best way is to double the ADC value, and then just add 1, ie; (ADC*2)+1.

2. The math *x /n does not introduce error with the *x, but the /n operation using integer math causes a rounding down error to the nearest n unit. This integer division rounding down error can be compensated by adding half the divisor; +0.5n /n, which in our case is +512 /1024. However since we have the ADC value already doubled from the previous operation, we need to divide by double, or 2048. So it becomes; +1024 /2048.

Putting it all together.
Getting a reading of 0.00v to 5.00v from the PIC ADC can be done using the correct data scaling of all samples, and properly compensated integer rounding on all samples, by the following integer math; display = (((ADC*2)+1) *500 +1024) /2048. Using *x of *500 means we are converting 1024 ADC units to 500 output units (which represent 0-500 ie 0.00v to 5.00v). #### thatoneguy Joined Feb 19, 2009 6,359 Thank you for taking the time to illustrate and explain the details! Which is faster in assembly language? Dividing by a non power of 2, or working with longer binary numbers? #### t06afre Joined May 11, 2009 5,934 I have been puzzled why many have been so stubborn about this matter. As shown in your RPM to Hz example: if 1 Hz = 60 RPM, everybody would intuitively use the ratio 60:1, not 59:1. Why should this change when it comes to data conversion? I hope this very pointless debate on this matter will end with this excellent proof. Can we make this thread a sticky? #### WBahn Joined Mar 31, 2012 26,398 I have stated, several times, that the scaling used in converting from the ADC output to the value represented has to match the scaling of the value being measured to the ADC output that results from it. If you have an N-bit ADC that has a reference voltage and the internal scaling is such that the reference voltage should be converted to an ADC output of 2^N (assuming this were an obtainable output) then using a divisor of 2^N is the correct value to use. I have never said otherwise. Let's look at the question that started this round of debate: Need the formula to convert from a 10 bit ADC value to a voltage to be displayed on an LCD with 5V full scale and 2.5V ref op amp to ADC.. So 0 volts is 2.5V (or bit 511 or 512) and -75V is bit 0 (0V op amp) and 75V is bit 1023 (5V op amp) Note that there is no mention of a Vref input to this unspecified ADC. There is only a terse description of some of the external signal conditioning circuitry.
But what there is, very explicitly, is the statement that -75V corresponds to an ADC output of 0 and +75V corresponds to an ADC output of 1023. As I said in that thread, "And the OP specified that he wanted the top count, 1023, to map to +75V. It's up to him to adjust his preamp circuits to make that happen, but the conversion from ADC output to display value should be based on what he has stated his scaling intentions are." Not uncommonly you design a system that measures some quantity, and part of your goal is to have a specific relationship between that measurement and the ADC output. For instance, you want 1 lsb to represent 1 gram. So you adjust your circuits, which may or may not include adjusting the Vref of the ADC, so that you get an ADC output of 1000 when you have 1 kg on your scale. Other times you simply put a weight on the scale, see what the ADC output is, and want to come up with the scaling parameters that are needed to be able to display the correct value. To repeat what is apparently an inconvenient question, let's take a really simple example. I have a 3-bit ADC. The input signal is adjusted so that the ADC will output 000 when the input is -4.0V (i.e., between -4.5V and -3.5V) and will output 111 when the input is +3V (i.e., between +2.5V and +3.5V). What is the scale size? By how much does the input signal change for each incremental change in the ADC output? Notice how this is very, very, very similar to the question that was asked by the OP of the other thread - a reading of 0 corresponds to -4V and a reading of 7 corresponds to +3V. What should the scaling equation be? Do we just blindly say, "Oh, it's a 3-bit ADC and therefore the only number we can use must be 2^3 or 8"? Well, that makes the scaling ratio 0.875V/lsb when, clearly, it is 1V/lsb.
Now, if you want to insist on using 8 but are also willing to go, "We need to determine what the effective Full Scale range of our ADC is so that we can plug it into the equation that uses 2^N in order to tell us the resolution," then fine, do that. You will do so by going FS = [V(2^N - 1) - V(0)] * [2^N / (2^N - 1)] So, for this example, you'll get FS = [(3V) - (-4V)] * [8/(8-1)] = 8V And then you'll plug that into your formula 1 lsb = FS/2^N = 8V/8 = 1V/lsb Or, you could do exactly what I described and go: 1 lsb = [V(2^N - 1) - V(0)] / (2^N - 1) and get 1 lsb = [(3V) - (-4V)] / (8-1) = 1V/lsb The two are EXACTLY and MATHEMATICALLY equivalent! #### ErnieM Joined Apr 24, 2011 8,196 As the OP bases his argument on the stated yet untested assumption that "(t)his is just like a PIC ADC," well, you do the math. #### THE_RB Joined Feb 11, 2008 5,438 Let's look at the question that started this round of debate: "Need the formula to convert from a 10 bit ADC value to a voltage to be displayed on an LCD with 5V full scale and 2.5V ref op amp to ADC.." Note that there is no mention of a Vref input to this unspecified ADC. There is only a terse description of some of the external signal conditioning circuitry. ... Agreed, the OP's question there is convoluted! But it does clearly state "10bit ADC value" and "5v full scale" in the same sentence, and since that is the most common ADC setup, it seemed quite clear. This /1023 vs /1024 argument has been ongoing for a long time on the forum throughout many threads, and the argument that erupts often derails the help that experts like yourself are giving to the OP. If the problems with using /1023 were better understood, people might stop suggesting /1023 solutions for ADC scaling, and then other people would not need to point out that /1023 scaling introduces rounding errors and corrupts the amplitude of the output data. There is no reason NOT to use the correct scaling of /1024, which is faster computationally anyway.
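The equivalence derived above can be checked numerically. A quick sketch using the 3-bit example's endpoints (Python used purely as a calculator here):

```python
# 3-bit ADC example: reading 0 maps to -4.0V, reading 7 (= 2^3 - 1) to +3.0V.
N = 3
v_lo, v_hi = -4.0, 3.0

# Route 1: compute an effective full-scale range, then divide by 2^N.
fs = (v_hi - v_lo) * (2**N) / (2**N - 1)   # 7V * 8/7 = 8V
lsb_via_fs = fs / 2**N                     # 8V / 8 = 1V per lsb

# Route 2: divide the end-to-end span directly by the step count, 2^N - 1.
lsb_direct = (v_hi - v_lo) / (2**N - 1)    # 7V / 7 = 1V per lsb
```

Both routes give the same 1V/lsb, which is the point of the post: the 2^N factor introduced in the full-scale definition cancels exactly.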
#### THE_RB Joined Feb 11, 2008 5,438 ... Which is faster in assembly language? Dividing by a non power of 2, or working with longer binary numbers? I would think it is faster to do assembly division by a power of two, as that can be done with right shifts (through carry), and even multibyte divisions are very fast. As for longer "binary numbers", that is totally separate from the decision to use divisions by a power of two or other divisions. The length of any binary numbers used in the calc will depend on the ratio, and on what size of integers are needed to get the desired precision. (Thanks to bance, thatoneguy and t06afre for the support.) #### R!f@@ Joined Apr 2, 2009 9,751 Good info. I always wondered why the heck most of you said 1023 when I think it should be 1024. Putting it all together. To get a reading of 0.00v to 5.00v from the PIC ADC can be done using the correct data scaling of all samples, and properly compensated integer rounding on all samples, by the following integer math; Using *x of *500 means we are converting 1024 ADC units to 500 output units (which represent 0-500 ie 0.00v to 5.00v). Is this what you mean by averaging the ADC to get more accurate voltage measurements? #### THE_RB Joined Feb 11, 2008 5,438 No. Averaging the ADC is when you add up a lot of ADC readings together, then do the scaling calc after that to display it. The calculation above is a scaling calc, to turn a 10bit ADC (0-1023) into a properly scaled result in the 0.00v-5.00v range for displaying. #### djsfantasi Joined Apr 11, 2010 7,922 The_RB said: Putting it all together. To get a reading of 0.00v to 5.00v from the PIC ADC can be done using the correct data scaling of all samples, and properly compensated integer rounding on all samples, by the following integer math; Using *x of *500 means we are converting 1024 ADC units to 500 output units (which represent 0-500 ie 0.00v to 5.00v). I almost understand this equation.
Where does the value 500 come from? I would expect a value of 10, which is twice the maximum 5.00 VDC. #### THE_RB Joined Feb 11, 2008 5,438 500 represents the max desired result, the "scaling" factor. So the maximum result of the calc is the integer 500 which would represent 5.00v. #### djsfantasi Joined Apr 11, 2010 7,922 OK! I didn't realize you were using an integer to represent a scaled value (voltage * 100). Exactly what I would expect with that clarification!
## A total variation model based on the strictly convex modification for image denoising. (English) Zbl 1474.94032 Summary: We propose a strictly convex functional in which the regular term consists of the total variation term and an adaptive logarithm based convex modification term. We prove the existence and uniqueness of the minimizer for the proposed variational problem. The existence, uniqueness, and long-time behavior of the solution of the associated evolution system is also established. Finally, we present experimental results to illustrate the effectiveness of the model in noise reduction, and a comparison is made in relation to the more classical methods of the traditional total variation (TV), the Perona-Malik (PM), and the more recent D-$$\alpha$$-PM method. An additional distinction from the other methods is that the parameters, for manual manipulation, in the proposed algorithm are reduced to basically only one. ### MSC: 94A08 Image processing (compression, reconstruction, etc.) in information and communication theory Full Text: ### References: [1] Chang, T.; Kuo, C. C. J., Texture analysis and classification with tree-structured wavelet transform, IEEE Transactions on Image Processing, 2, 4, 429-441, (1993) [2] Scholkmann, F.; Revol, V.; Kaufmann, R.; Baronowski, H.; Kottler, C., A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry, Physics in Medicine and Biology, 59, 6, 1425, (2014) [3] Ma, J.; Plonka, G., Combined curvelet shrinkage and nonlinear anisotropic diffusion, IEEE Transactions on Image Processing, 16, 9, 2198-2206, (2007) [4] Candes, E.; Donoho, D. L., Curvelets: a surprisingly effective nonadaptive representation for objects with edges, (2000) [5] Candès, E. J.; Donoho, D.
L., New tight frames of curvelets and optimal representations of objects with piecewise $$C^2$$ singularities, Communications on Pure and Applied Mathematics, 57, 2, 219-266, (2004) · Zbl 1038.94502 [6] Candès, E.; Demanet, L.; Donoho, D.; Ying, L., Fast discrete curvelet transforms, Multiscale Modeling & Simulation, 5, 3, 861-899, (2006) · Zbl 1122.65134 [7] Perona, P.; Malik, J., Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 7, 629-639, (1990) [8] Rudin, L. I.; Osher, S.; Fatemi, E., Nonlinear total variation based noise removal algorithms, Physica D, 60, 1–4, 259-268, (1992) · Zbl 0780.49028 [9] Chan, T. F.; Esedoğlu, S., Aspects of total variation regularized $$L^1$$ function approximation, SIAM Journal on Applied Mathematics, 65, 5, 1817-1837, (2005) · Zbl 1096.94004 [10] Weickert, J.; Ter Haar Romeny, B. M.; Viergever, M. A., Efficient and reliable schemes for nonlinear diffusion filtering, IEEE Transactions on Image Processing, 7, 3, 398-410, (1998) [11] Guo, Z.; Sun, J.; Zhang, D.; Wu, B., Adaptive Perona-Malik model based on the variable exponent for image denoising, IEEE Transactions on Image Processing, 21, 3, 958-967, (2012) · Zbl 1372.94102 [12] Catté, F.; Lions, P.-L.; Morel, J.-M.; Coll, T., Image selective smoothing and edge detection by nonlinear diffusion, SIAM Journal on Numerical Analysis, 29, 1, 182-193, (1992) · Zbl 0746.65091 [13] Chen, Y.; Rao, M., Minimization problems and associated flows related to weighted $$p$$ energy and total variation, SIAM Journal on Mathematical Analysis, 34, 5, 1084-1104, (2003) · Zbl 1038.49007 [14] Maiseli, B. J.; Liu, Q.; Elisha, O.
A.; Gao, H., Adaptive Charbonnier superresolution method with robust edge preservation capabilities, Journal of Electronic Imaging, 22, 4, (2013) [15] Shah, J., Common framework for curve evolution, segmentation and anisotropic diffusion, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR ’96) [16] Barash, D.; Comaniciu, D., A common framework for nonlinear diffusion, adaptive smoothing, bilateral filtering and mean shift, Image and Vision Computing, 22, 1, 73-81, (2004) [17] Koenderink, J. J., The structure of images, Biological Cybernetics, 50, 5, 363-370, (1984) · Zbl 0537.92011 [18] Witkin, A. P., Scale-space filtering [19] Weickert, J., Anisotropic Diffusion in Image Processing, 1, (1998), Stuttgart, Germany: Teubner, Stuttgart, Germany · Zbl 0886.68131 [20] Chen, Y.; Wunderli, T., Adaptive total variation for image restoration in BV space, Journal of Mathematical Analysis and Applications, 272, 1, 117-137, (2002) · Zbl 1020.68104 [21] Chan, T. F.; Shen, J., Mathematical models for local nontexture inpaintings, SIAM Journal on Applied Mathematics, 62, 3, 1019-1043, (2001) · Zbl 1050.68157 [22] Vogel, C. R., Total variation regularization for ill-posed problems, (1993), Department of Mathematical Sciences, Montana State University [23] Vese, L., Problèmes variationnels et EDP pour l’analyse d’images et l’évolution de courbes [Ph.D. thesis], (1996), Université de Nice Sophia-Antipolis [24] Chen, Y.; Levine, S.; Rao, M., Variable exponent, linear growth functionals in image restoration, SIAM Journal on Applied Mathematics, 66, 4, 1383-1406, (2006) · Zbl 1102.49010 [25] Chan, T.; Marquina, A.; Mulet, P., High-order total variation-based image restoration, SIAM Journal on Scientific Computing, 22, 2, 503-516, (2000) · Zbl 0968.68175 [26] Andreu-Vaillo, F.; Caselles, V.; Mazón, J.
M., Parabolic Quasilinear Equations Minimizing Linear Growth Functionals, 223, (2004), Springer · Zbl 1053.35002 [27] Acar, R.; Vogel, C. R., Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems, 10, 6, 1217-1229, (1994) · Zbl 0809.35151 [28] Strong, D. M.; Chan, T. F., Spatially and scale adaptive total variation based regularization and anisotropic diffusion in image processing, (1996), UCLA Math Department CAM Report, Citeseer [29] Chambolle, A.; Lions, P.-L., Image recovery via total variation minimization and related problems, Numerische Mathematik, 76, 2, 167-188, (1997) · Zbl 0874.68299 [30] You, Y.-L.; Xu, W.; Tannenbaum, A.; Kaveh, M., Behavioral analysis of anisotropic diffusion in image processing, IEEE Transactions on Image Processing, 5, 11, 1539-1553, (1996) [31] Vese, L., A study in the BV space of a denoising-deblurring variational problem, Applied Mathematics and Optimization, 44, 2, 131-161, (2001) · Zbl 1003.35009 [32] Marquina, A.; Osher, S., Explicit algorithms for a new time dependent model based on level set motion for nonlinear deblurring and noise removal, SIAM Journal on Scientific Computing, 22, 2, 387-405, (2000) · Zbl 0969.65081 [33] Zhou, X., An evolution problem for plastic antiplanar shear, Applied Mathematics and Optimization, 25, 3, 263-285, (1992) · Zbl 0758.73014 [34] Aubert, G.; Kornprobst, P., Mathematical Problems in Image Processing, 147, (2006), New York, NY, USA: Springer, New York, NY, USA · Zbl 1019.94002 [35] Doob, J. L., Measure Theory, 143, (1994), New York, NY, USA: Springer, New York, NY, USA [36] Ambrosio, L.; Fusco, N.; Pallara, D., Functions of Bounded Variation and Free Discontinuity Problems, 254, (2000), Oxford, UK: Clarendon Press, Oxford, UK · Zbl 0957.49001 [37] Niculescu, C. P.; Persson, L.-E., Convex Functions and Their Applications: A Contemporary Approach, 23, (2006), Springer [38] Renardy, M.; Rogers, R.
C., An Introduction to Partial Differential Equations, 13, (2004), Springer · Zbl 1072.35001 [39] Brézis, H., Opérateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert, 50, (1973), North-Holland · Zbl 0252.47055 [40] Durand, S.; Fadili, J.; Nikolova, M., Multiplicative noise removal using $$L^1$$ fidelity on frame coefficients, Journal of Mathematical Imaging and Vision, 36, 3, 201-226, (2010) [41] Wang, Z.; Bovik, A. C.; Sheikh, H. R.; Simoncelli, E. P., Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, 13, 4, 600-612, (2004) [42] Ogada, E. A.; Guo, Z.; Wu, B., An alternative variational framework for image denoising, Abstract and Applied Analysis, 2014, (2014) · Zbl 1474.94024 [43] Yu, Y.; Acton, S. T., Speckle reducing anisotropic diffusion, IEEE Transactions on Image Processing, 11, 11, 1260-1270, (2002) This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
# I have no idea what 'Normalize'/'Normalization' means Help please? Just started a Quantum Computing course, and I'm getting continuously confused and lost in the class. I don't think the class is "out of my league", as I've been able to ace every physics class I've taken so far, so I don't want to drop the class. That said, I'm getting confused by the most basic concepts in the course. I have no idea what normalizing a quantum state means or how to normalize something. It seems like the professor is just pulling variables out of thin air. For example, say we have 2 qubits like this: $\left|\psi\right\rangle = \alpha_{00}\left|00\right\rangle + \alpha_{01}\left|01\right\rangle + \alpha_{10}\left|10\right\rangle + \alpha_{11}\left|11\right\rangle$ How do we determine when the 1st qubit is 0? If we DO measure the 1st qubit to be zero, how then do we determine when the 2nd qubit is also zero? (BTW, this isn't homework. These are concepts I'm desperately trying to understand so I can TRY to do my homework...) For what it's worth, I've never taken an actual Linear Algebra or Probability course. I've only ever been taught what I needed for whatever class needed it. Normalization, in this case, means that the sum of the probabilities for each individual state must equal 1: $|\alpha_{00}|^2 + |\alpha_{01}|^2 + |\alpha_{10}|^2 + |\alpha_{11}|^2 = 1$ For example, using the same 2-qubit system as before, how would we generate |ψ,after>? In regards to this question, normalizing shouldn't be thought of as a verb, but rather think of it as a checking system to make sure all your probabilities add to one. After you measure something you collapse the system from its linear superposition. So the system will be in any one state afterwards.
Now because of this, you know that the other states are impossible because you just measured the system to have a particular state. So when you ask what is |ψ,after>, I guess you can write |ψ,after> = 1·|00⟩, if 00 is the state you measure, and the other probability amplitudes will become 0. That's my two cents. Ok, I understand that. However, in class, we did something like this: Say we have the same 2-qubit state we've been talking about, $\left|\psi\right\rangle = \alpha_{00}\left|00\right\rangle + \alpha_{01}\left|01\right\rangle + \alpha_{10}\left|10\right\rangle + \alpha_{11}\left|11\right\rangle$. If we measure the first qubit to be zero, then the state becomes $\left|\psi\right\rangle = \frac{\alpha_{00} \left|00\right\rangle + \alpha_{01} \left|01\right\rangle}{\sqrt{\left|\alpha_{00}\right|^{2} + \left|\alpha_{01}\right|^{2}}}$, where the denominator is the normalization factor. How does one mathematically determine the normalization factor for any state? Well intuitively you can see that the denominator is equal to the square root of the probability of having the first qubit be zero. As for "why" that is the normalization factor, I do not know. Ok, thanks. Yeah, I can see and reproduce the pattern, but I was wondering "why". No worries, though; I appreciate your help. Have you taken a probability course before? This is actually quite similar to Bayes's theorem, except that in QM the probability density is the square of a probability amplitude. In QM $\langle\Psi|\Psi\rangle$ must equal 1, because that quantity means 'the probability that $\Psi$ is in the state $\Psi$'. In any system where you are calculating probabilities before and after gaining more evidence, you have to normalize the set of probabilities so that the sum equals 1. It would be silly if there were more than a 100% chance that a measurement would be one of those states. It would also be illogical if there were less than a 100% chance that some value would be found. That's not physics, it is just part of the definition of probability. A measurement of the first qubit that shows it to be in the state $|0\rangle$ means that only the two state kets that you wrote are options. So it must be in a superposition of the two base kets, and one would assume that $\left(\frac{a_{00}}{a_{01}}\right)_{\text{before}}=\left(\frac{a_{00}}{a_{01}}\right)_{\text{after}}$ (in reality this isn't true because it is difficult to make a perfect measurement, but if we assume that we have the best tools that nature allows, this is true). That, and the fact that the probabilities sum to 1, means that the normalization factor must be $\frac{1}{\sqrt{|a_{00}|^2+|a_{01}|^2}}$. Every time you see a quantum state written out, it is assumed that there is a normalization factor. But the initial state is always just assumed to be normalized, because you can just absorb the factor into the $a_n$. You only need to explicitly write one when the coefficients have already been defined. ps. are you a physics major or a compsci major? Have you taken a QM course before?
The class may not be out of the league of what you can understand, but if you don't have much experience with QM it will be rough. What text are you using? I'm actually an Engineering Physics major. Never taken a QM or probability course, either. Currently a junior. We're using "Quantum Computation and Quantum Information" by Nielsen and Chuang. I understood mostly everything in your first post. However, it seems like you just took a leap to get to the normalization factor. I feel dumb, but I still just don't understand how you got it... The probability that the state will be found in $|00\rangle$ is $|\langle 00|\Psi\rangle|^2=|\alpha_{00}|^2$. That is a postulate of QM. After the measurement of the first qubit you have a new state that is a superposition of $|00\rangle$ and $|01\rangle$. If you measure the second qubit (and let's assume that you do it quickly enough that we can assume the state has not evolved), it will be one of those with probability $|\alpha_{00}|^2$ and $|\alpha_{01}|^2$ respectively. Those two values cannot sum to one, since we know that before the first measurement they were only 2 of the 4 options (unless the other probabilities were zero, but then this is a silly example, so let's assume that each base ket had a non-zero probability). However, for this second measurement, you will definitely find the second qubit in one of the two states. There is a 100% chance that it will be 0 or 1. It is equivalent to say there is a probability of 1. The factor $A$ that makes $A^2\langle\Psi_2|\Psi_2\rangle = A^2\left(|\alpha_{00}|^2+|\alpha_{01}|^2\right)=1$ is $A=\frac{1}{\sqrt{|\alpha_{00}|^2+|\alpha_{01}|^2}}$. The thing that you need to realize is that the alphas (tex is giving me trouble) don't mean anything on their own. They are just the ratios for the kets. I'm going to bed, but I'll go into more detail if you have specific questions tomorrow. Also, if this were your 3rd QM class and you had been studying linear algebra and prob.
for years, then you might have reason to feel dumb. As it is, I would just be sure to read up on these things. This is the website for one of my professors: http://www.quantum.umb.edu/Jacobs/books.html The chapters in the measurement book have not been edited, so the reading might be dense at points, but I think it covers everything you need to know about basic probability and QM measurement. You might also look into the book by Raymond Laflamme and others. If you happen to find a cheap copy, it gives better background for a beginner. Also, I think that Nielsen covers QM in more depth later in the book. I haven't read it, but I've skimmed it a few times. Ok thanks, it's starting to piece together a little better now. I'll definitely give that book a read. So we can't say $|\alpha_{00}|^2$ and $|\alpha_{01}|^2$ sum up to 1, since they were only 2 of the 4 options in the original function. But we CAN say that for the second measurement they do sum up to 1, but with a term that preserves normalization, correct?
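For what it's worth, the bookkeeping discussed in this thread can be sketched numerically in Python (the amplitudes are chosen arbitrarily as an equal superposition; this is an illustration, not anything from the course):

```python
import math

# Arbitrary (already normalized) amplitudes for |00>, |01>, |10>, |11>.
amps = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}

# Normalization check: the squared magnitudes must sum to 1.
assert math.isclose(sum(abs(a) ** 2 for a in amps.values()), 1.0)

# Measure the first qubit and find it to be 0: only |00> and |01> survive.
kept = {k: a for k, a in amps.items() if k[0] == "0"}

# Renormalize with A = 1/sqrt(|a00|^2 + |a01|^2) so probabilities sum to 1 again.
norm = math.sqrt(sum(abs(a) ** 2 for a in kept.values()))
after = {k: a / norm for k, a in kept.items()}

# Probability that the second qubit is 0, given the first qubit measured 0.
p_second_zero = abs(after["00"]) ** 2
```

With these equal amplitudes the post-measurement state has amplitudes 1/√2 each, so the second qubit is 0 with probability 0.5, and the renormalized probabilities again sum to 1, which is exactly the "term that preserves normalization" question above.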
Synopsis # Electrons and Water Molecules Form a Pulsating Cluster Physics 14, s29 In water, single electrons can cluster with water molecules to form a quasiparticle that oscillates in size, a behavior that could influence the equilibration speed of chemical reactions in the system. When a free electron in water interacts with neighboring water molecules, it can form a quasiparticle known as a “solvated” electron. How these solvated electrons behave provides fundamental insights into charge transport and chemical reactions. Now, Michael Wörner of the Max Born Institute in Germany and colleagues have observed solvated electrons in water inducing previously unseen terahertz-scale oscillations in the water’s polarization [1]. These oscillations may play an important role in how a chemical reaction approaches equilibrium. To produce solvated electrons, the researchers applied pulses of terahertz and near-infrared radiation to a $50\text{-}\mu\text{m}$-wide jet of water. The radiation stripped some water molecules of electrons. These electrons moved a short distance before localizing because of interactions with other water molecules. To understand the liquid’s properties when that happened, the researchers used terahertz light to monitor polarizability—which relates to how easily the liquid developed a dipole moment in response to an electric field. The researchers observed the liquid’s polarization oscillate in magnitude. The oscillations lasted tens of picoseconds and had a frequency between 0.2 and 1.5 THz, with the higher frequencies occurring for higher electron concentrations. Using a previously developed theoretical model, Wörner and colleagues determined that this oscillatory response arose from the quantized motion of the solvated electrons. The researchers explained the generation of oscillations as follows: When an electron initially detaches from its water molecule, the electron has a high kinetic energy.
That energy decreases as the electron interacts with other water molecules. After losing a certain amount of energy, the electron interacts with nearby water molecules, whose dipole moments point toward the electron, forming a solvated electron. This quasiparticle oscillates in size as it gains the electron’s remaining energy, leading to the terahertz-frequency oscillations that the team measured. –Sophia Chen Sophia Chen is a freelance science writer based in Columbus, Ohio. ## References 1. A. Ghalgaoui et al., “Terahertz polaron oscillations of electrons solvated in liquid water,” Phys. Rev. Lett. 126, 097401 (2021). ## Subject Areas Materials Science
# SwitchTransformers ## Overview The SwitchTransformers model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, Noam Shazeer. The Switch Transformer model uses a sparse T5 encoder-decoder architecture, where the MLPs are replaced by a Mixture of Experts (MoE). A routing mechanism (top-1 in this case) associates each token with one of the experts, where each expert is a dense MLP. While switch transformers have a lot more weights than their equivalent dense models, the sparsity allows better scaling and better finetuning performance at scale. During a forward pass, only a fraction of the weights are used. The routing mechanism allows the model to select relevant weights on the fly, which increases the model capacity without increasing the number of operations. The abstract from the paper is the following: In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely-activated model — with outrageous numbers of parameters — but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs and training instability — we address these with the Switch Transformer. We simplify the MoE routing algorithm and design intuitive improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities and we show large sparse models may be trained, for the first time, with lower precision (bfloat16) formats. We design models based off T5-Base and T5-Large to obtain up to 7x increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains over the mT5-Base version across all 101 languages. Finally, we advance the current scale of language models by pre-training up to trillion parameter models on the “Colossal Clean Crawled Corpus” and achieve a 4x speedup over the T5-XXL model. Tips: • SwitchTransformers uses the T5Tokenizer, which can be loaded directly from each model’s repository. • The released weights are pretrained on an English Masked Language Modeling task, and should be finetuned. This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here. ## SwitchTransformersConfig ### class transformers.SwitchTransformersConfig < > ( vocab_size = 32128 d_model = 768 d_kv = 64 d_ff = 2048 expert_capacity = 64 num_layers = 12 num_sparse_encoder_layers = 3 num_decoder_layers = 12 num_sparse_decoder_layers = 3 num_heads = 12 num_experts = 8 router_type = 'tokens_masked' router_bias = False router_jitter_noise = 0.01 router_dtype = 'float32' router_ignore_padding_tokens = False relative_attention_num_buckets = 32 relative_attention_max_distance = 128 dropout_rate = 0.1 layer_norm_epsilon = 1e-06 router_z_loss_coef = 0.001 router_aux_loss_coef = 0.001 initializer_factor = 1.0 feed_forward_proj = 'relu' is_encoder_decoder = True add_router_probs = False use_cache = True pad_token_id = 0 eos_token_id = 1 **kwargs ) Parameters • vocab_size (int, optional, defaults to 32128) — Vocabulary size of the SwitchTransformers model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling SwitchTransformersModel. • d_model (int, optional, defaults to 768) — Size of the encoder layers and the pooler layer. • d_kv (int, optional, defaults to 64) — Size of the key, query, value projections per attention head. d_kv has to be equal to d_model // num_heads.
• d_ff (int, optional, defaults to 2048) — Size of the intermediate feed forward layer in each SwitchTransformersBlock.
• expert_capacity (int, optional, defaults to 64) — Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular Transformer.
• num_layers (int, optional, defaults to 12) — Number of dense hidden layers in the Transformer encoder.
• num_sparse_encoder_layers (int, optional, defaults to 3) — Number of sparse (MoE) layers in the Transformer encoder.
• num_decoder_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer decoder. Will use the same value as num_layers if not set.
• num_sparse_decoder_layers (int, optional, defaults to 3) — Number of sparse (MoE) layers in the Transformer decoder.
• num_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
• num_experts (int, optional, defaults to 8) — Number of experts for each SwitchTransformer layer.
• router_type (str, optional, defaults to "tokens_masked") — Router type - choose between "tokens_masked", "tokens_scatter" and "experts_masked".
• router_bias (bool, optional, defaults to False) — Whether to add a bias to the router.
• router_jitter_noise (float, optional, defaults to 0.01) — Amount of noise to add to the router.
• router_dtype (str, optional, defaults to "float32") — The dtype used for the routers. It is preferable to keep the dtype as "float32", as specified in the selective precision discussion in the paper.
• router_ignore_padding_tokens (bool, optional, defaults to False) — Whether to ignore padding tokens when routing.
• relative_attention_num_buckets (int, optional, defaults to 32) — The number of buckets to use for each attention layer.
• relative_attention_max_distance (int, optional, defaults to 128) — The maximum distance of the longer sequences for the bucket separation.
• dropout_rate (float, optional, defaults to 0.1) — The ratio for all dropout layers.
• layer_norm_epsilon (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers.
• router_z_loss_coef (float, optional, defaults to 0.001) — The z loss factor for the total loss.
• router_aux_loss_coef (float, optional, defaults to 0.001) — The aux loss factor for the total loss.
• initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing).
• feed_forward_proj (string, optional, defaults to "relu") — Type of feed forward layer to be used. Should be one of "relu" or "gated-gelu". SwitchTransformers v1.1 uses the "gated-gelu" feed forward projection. Original SwitchTransformers uses "relu".
• add_router_probs (bool, optional, defaults to False) — Whether to output router probabilities to compute the router auxiliary loss.
• use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).

This is the configuration class to store the configuration of a SwitchTransformersModel. It is used to instantiate a SwitchTransformers model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the SwitchTransformers google/switch-base-8 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

## SwitchTransformersTop1Router

### class transformers.SwitchTransformersTop1Router < > ( config: SwitchTransformersConfig )

Router in which tokens choose their top-1 expert assignment. This router uses the same mechanism as in Switch Transformer (https://arxiv.org/abs/2101.03961) and V-MoE (https://arxiv.org/abs/2106.05974): tokens choose their top experts.
Items are sorted by router_probs and then routed to their choice of expert until the expert’s expert_capacity is reached. There is no guarantee that each token is processed by an expert, or that each expert receives at least one token.

#### _compute_router_probabilities < > ( hidden_states: Tensor ) router_probabilities (torch.Tensor)

Parameters

• hidden_states (torch.Tensor) — (batch_size, sequence_length, hidden_dim) from which router probabilities are computed.

Returns router_probabilities (torch.Tensor): Tensor of shape (batch_size, sequence_length, num_experts) corresponding to the probabilities for each token and expert. Used for routing tokens to experts. router_logits (torch.Tensor): Logits tensor of shape (batch_size, sequence_length, num_experts) corresponding to raw router logits. This is used later for computing the router z-loss.

Computes router probabilities from input hidden states.

#### forward < > ( hidden_states: Tensor )

Parameters

• hidden_states (torch.Tensor) — [num_groups, tokens_per_group, hidden_dim] inputs to send to experts.

Generic forward function for every Router class. Each Router expects the same input hidden states (hidden_states), corresponding to the hidden states for each token, and the expert_capacity, corresponding to the number of tokens the Router will send to each expert; some routers can send only a few tokens to each expert. Each Router works as follows: given the hidden states for each token, it computes router_probs and router_logits from the router_weights. These give, for each token, the raw probability of being assigned to each expert. Each Router class then has to define its own _compute_routing_instructions.

## SwitchTransformersSparseMLP

### class transformers.SwitchTransformersSparseMLP < > ( config: SwitchTransformersConfig expert_class: Module = <class 'transformers.models.switch_transformers.modeling_switch_transformers.SwitchTransformersDenseActDense'> )

Implementation of the Switch Transformers Sparse MLP module.
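The probability computation described for _compute_router_probabilities can be sketched in plain PyTorch. This is a minimal, hypothetical re-implementation for illustration only (the function and argument names are ours, not the library's exact API); it mirrors the router_jitter_noise and router_dtype behaviors described in the configuration:

```python
import torch


def compute_router_probabilities(hidden_states, router_weights, jitter_noise=0.0):
    """Sketch of top-1 router probability computation (illustrative only).

    hidden_states: (batch_size, sequence_length, hidden_dim)
    router_weights: a linear layer mapping hidden_dim -> num_experts
    """
    if jitter_noise > 0:
        # multiplicative jitter on the inputs, as controlled by router_jitter_noise
        noise = torch.empty_like(hidden_states).uniform_(1.0 - jitter_noise, 1.0 + jitter_noise)
        hidden_states = hidden_states * noise
    # routers are kept in float32 for numerical stability, per the router_dtype discussion
    router_logits = router_weights(hidden_states.to(torch.float32))
    router_probabilities = torch.softmax(router_logits, dim=-1)
    return router_probabilities, router_logits
```

Each token thus receives a probability distribution over the num_experts experts; the top-1 router then keeps only the argmax expert per token.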
#### forward < > ( hidden_states )

Hold on, this will be slightly tricky to understand. In the correct order, a MoE layer does the following:

1. Gets the router_mask from the router. The shape of the mask is (batch_size, sequence_length, num_experts) and corresponds to the argmax of the router_probs. The probabilities are needed in the computation of the hidden states: they are broadcast to the hidden states values (and can be interpreted as a scaling factor).
2. Dispatches the tokens to their associated experts. We do a classic for loop over the experts and assign the corresponding hidden states to each expert.

## SwitchTransformersModel

### class transformers.SwitchTransformersModel < > ( config: SwitchTransformersConfig )

Parameters

• config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare SWITCH_TRANSFORMERS Model transformer outputting raw hidden-states without any specific head on top. The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder T5-like model with sparse feed forward layers that form a Mixture of Experts (MoE) architecture. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
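The two MoE steps above (mask from the router, then a loop dispatching tokens to experts) can be sketched as a standalone function. This is simplified, hypothetical code for illustration, not the library's implementation; in particular, tokens dropped for exceeding expert_capacity are simply zeroed here, whereas in the model they effectively pass through via the residual connection:

```python
import torch


def moe_layer(hidden_states, router_probs, experts, expert_capacity):
    """Sketch of a top-1 MoE layer (illustrative, simplified)."""
    num_experts = router_probs.shape[-1]

    # step 1: router mask from the argmax of the router probabilities
    expert_index = router_probs.argmax(dim=-1)  # (batch_size, sequence_length)
    mask = torch.nn.functional.one_hot(expert_index, num_experts).float()

    # enforce expert_capacity: each token gets a 1-based position within its
    # chosen expert; tokens beyond the capacity are dropped (mask set to 0)
    position_in_expert = torch.cumsum(mask, dim=1) * mask
    mask = mask * (position_in_expert <= expert_capacity).float()

    # probability of the selected expert, broadcast later as a scaling factor
    top_prob = (router_probs * mask).sum(dim=-1, keepdim=True)

    # step 2: classic for loop over the experts, assigning each expert its tokens
    output = torch.zeros_like(hidden_states)
    for e, expert in enumerate(experts):
        token_mask = mask[..., e].bool()
        if token_mask.any():
            output[token_mask] = expert(hidden_states[token_mask])
    return output * top_prob
```

With identity experts and ample capacity, the output is just the input scaled by each token's top router probability, which makes the broadcast-as-scaling-factor behavior easy to check.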
#### forward < > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor)

Parameters

• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? To know more on how to prepare input_ids for pretraining take a look at SWITCH_TRANSFORMERS Training.
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? SWITCH_TRANSFORMERS uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). To know more on how to prepare decoder_input_ids for pretraining take a look at SWITCH_TRANSFORMERS Training.
• decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules in the decoder. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. • decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. • use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). • output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. • output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. • output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference. • return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.modeling_outputs.Seq2SeqMoEModelOutput or tuple(torch.FloatTensor) A transformers.modeling_outputs.Seq2SeqMoEModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SwitchTransformersConfig) and inputs. • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. • past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. • decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. • decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the encoder model, useful to compute the auxiliary loss and the z_loss for the sparse modules.

The SwitchTransformersModel forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, SwitchTransformersModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
>>> model = SwitchTransformersModel.from_pretrained("google/switch-base-8")

>>> input_ids = tokenizer(
...     "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids  # Batch size 1
>>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids  # Batch size 1

>>> # preprocess: Prepend decoder_input_ids with start token which is pad token for SwitchTransformersModel.
>>> # This is not needed for torch's SwitchTransformersForConditionalGeneration as it does this internally using labels arg.
>>> decoder_input_ids = model._shift_right(decoder_input_ids)

>>> # forward pass
>>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
>>> last_hidden_states = outputs.last_hidden_state

## SwitchTransformersForConditionalGeneration

### class transformers.SwitchTransformersForConditionalGeneration < > ( config: SwitchTransformersConfig )

Parameters

• config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

SWITCH_TRANSFORMERS Model with a language modeling head on top.
The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder T5-like model with sparse feed forward layers that form a Mixture of Experts (MoE) architecture. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward < > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None decoder_head_mask: typing.Optional[torch.FloatTensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = True return_dict: typing.Optional[bool] = None ) transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)

Parameters

• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
SWITCH_TRANSFORMERS is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? To know more on how to prepare input_ids for pretraining take a look at SWITCH_TRANSFORMERS Training.
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
• 1 for tokens that are not masked,
• 0 for tokens that are masked.
• decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? SWITCH_TRANSFORMERS uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). To know more on how to prepare decoder_input_ids for pretraining take a look at SWITCH_TRANSFORMERS Training.
• decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default.
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• decoder_head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules in the decoder.
Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• cross_attn_head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]:
• 1 indicates the head is not masked,
• 0 indicates the head is masked.
• encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size) is a sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
• past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length).
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
• use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values).
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss, and should not be returned during inference.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
• labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the sequence-to-sequence language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size - 1].

Returns transformers.modeling_outputs.Seq2SeqMoEOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.Seq2SeqMoEOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SwitchTransformersConfig) and inputs.

• loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss.
• logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
• past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
• decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs.
• decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• decoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models.
• cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads.
• encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model.
• encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
• encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads.
• encoder_router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the encoder model, useful to compute the auxiliary loss and z_loss for Mixture of Experts models.

The SwitchTransformersForConditionalGeneration forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

>>> from transformers import AutoTokenizer, SwitchTransformersForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
>>> model = SwitchTransformersForConditionalGeneration.from_pretrained("google/switch-base-8")

>>> # training
>>> input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
>>> labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids
>>> outputs = model(input_ids=input_ids, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits

>>> # inference
>>> input_ids = tokenizer(
...     "summarize: studies have shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids  # Batch size 1
>>> outputs = model.generate(input_ids)
>>> # Since the model has been pretrained only on a masked language modeling task, the generation is gibberish,
>>> # e.g. ". To, let’s say you have a dog. To summarize:"

## SwitchTransformersEncoderModel

### class transformers.SwitchTransformersEncoderModel < > ( config: SwitchTransformersConfig )

Parameters

• config (SwitchTransformersConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare SWITCH_TRANSFORMERS Model transformer outputting the encoder’s raw hidden-states without any specific head on top. The SWITCH_TRANSFORMERS model was proposed in Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity by William Fedus, Barret Zoph, and Noam Shazeer. It’s an encoder-decoder T5-like model with sparse feed forward layers that form a Mixture of Experts (MoE) architecture. This model inherits from PreTrainedModel.
Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

#### forward

< > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = True return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MoEModelOutput or tuple(torch.FloatTensor)

Parameters

• input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. SWITCH_TRANSFORMERS is a model with relative position embeddings, so you should be able to pad the inputs on both the right and the left. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for detail. To know more about how to prepare input_ids for pretraining, take a look at SWITCH_TRANSFORMERS Training.
• attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • 1 for tokens that are not masked,
  • 0 for tokens that are masked.
• head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules.
Mask values selected in [0, 1]:
  • 1 indicates the head is not masked,
  • 0 indicates the head is masked.
• inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
• output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
• output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
• output_router_logits (bool, optional) — Whether or not to return the logits of all the routers. They are useful for computing the router loss and should not be returned during inference.
• return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.

Returns transformers.modeling_outputs.MoEModelOutput or tuple(torch.FloatTensor)

A transformers.modeling_outputs.MoEModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (SwitchTransformersConfig) and inputs.

• last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.
• hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, plus one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
• attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
• router_probs (tuple(torch.FloatTensor), optional, returned when output_router_probs=True and config.add_router_probs=True is passed or when config.output_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Raw router probabilities computed by the MoE routers; these terms are used to compute the auxiliary loss and the z_loss for Mixture of Experts models.

The SwitchTransformersEncoderModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example:

```python
>>> from transformers import AutoTokenizer, SwitchTransformersEncoderModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/switch-base-8")
>>> model = SwitchTransformersEncoderModel.from_pretrained("google/switch-base-8")
>>> input_ids = tokenizer(
...     "Studies have been shown that owning a dog is good for you", return_tensors="pt"
... ).input_ids  # Batch size 1
>>> outputs = model(input_ids=input_ids)
>>> last_hidden_states = outputs.last_hidden_state
```
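The router logits returned above are the raw inputs to the auxiliary load-balancing loss described in the Switch Transformers paper. As a rough illustration only, here is a NumPy sketch of that standard top-1 load-balancing loss (num_experts · Σᵢ fᵢ·Pᵢ, where fᵢ is the fraction of tokens routed to expert i and Pᵢ the mean router probability for expert i); the function name is mine and this is not a `transformers` API:

```python
import numpy as np

def switch_aux_loss(router_logits: np.ndarray) -> float:
    """Load-balancing loss for one MoE layer.

    router_logits: (batch_size, sequence_length, num_experts), raw router
    scores. Returns num_experts * sum_i f_i * P_i, where f_i is the
    fraction of tokens whose top-1 expert is i and P_i is the mean router
    probability assigned to expert i.
    """
    logits = router_logits.reshape(-1, router_logits.shape[-1])
    # Softmax over the expert dimension.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    num_experts = logits.shape[-1]
    # f_i: fraction of tokens dispatched (top-1) to each expert.
    top1 = probs.argmax(axis=-1)
    f = np.bincount(top1, minlength=num_experts) / logits.shape[0]
    # P_i: mean router probability per expert.
    p = probs.mean(axis=0)
    return float(num_experts * np.sum(f * p))

# A perfectly uniform router yields a loss of exactly 1.0.
uniform = np.zeros((2, 4, 8))  # batch=2, seq=4, 8 experts
print(switch_aux_loss(uniform))  # 1.0
```

Skewed logits push the loss above 1.0, which is what penalizes routers that collapse onto a few experts.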
{}
## What Is Net Foreign Factor Income (NFFI)?

Net foreign factor income (NFFI) is the difference between a nation’s gross national product (GNP) and its gross domestic product (GDP).

### Key Takeaways

• Net foreign factor income (NFFI) is the difference between a nation’s gross national product (GNP) and gross domestic product (GDP).
• NFFI is generally not substantial in most nations, since payments earned by citizens and those paid to foreigners more or less offset each other.
• NFFI may assume increasing importance in a globalized economy, as people and companies move across international borders more easily than they did in the past.

## Understanding Net Foreign Factor Income (NFFI)

NFFI is the difference between the aggregate amount that a country’s citizens and companies earn abroad and the aggregate amount that foreign citizens and overseas companies earn in that country. In mathematical terms:

$$\begin{aligned}&\mathit{NFFI} = \mathit{GNP} - \mathit{GDP}\\&\mathit{GNP} = \text{gross national product}\\&\mathit{GDP} = \text{gross domestic product}\end{aligned}$$

The NFFI level is generally not substantial in most nations, since payments earned by citizens and those paid to foreigners more or less offset each other. However, NFFI's impact may be significant in smaller nations that have substantial foreign investment relative to their economy and few assets overseas, since their GDP will be quite high compared to their GNP.

GDP refers to all economic output that occurs domestically, i.e. within a nation’s boundaries, regardless of whether production is owned by a local company or a foreign entity. GNP, on the other hand, measures the output of the citizens and companies of a particular nation, regardless of whether they are located within its boundaries or overseas. For example, if a Japanese company has a production facility in the U.S., its output will count toward U.S. GDP and Japan’s GNP.

GDP is the most widely accepted measure of economic output, having supplanted GNP around 1990.
In making the switch, the Bureau of Economic Analysis (BEA) said GDP provided a more straightforward comparison with other measures of economic activity in the United States, and that it would be helpful to have a standard measure of economic output; most other countries at the time had already adopted GDP as their primary measure of production.

## Special Considerations

Many economists have questioned how meaningful GNP or GDP is as a measure of a nation's economic well-being, since neither counts most unpaid work, while both count economic activity that is unproductive or destructive.

Several economists still criticize GDP specifically for providing a somewhat misleading picture of an economy's true health and the well-being of its citizens. This is because GDP does not take into account the profits earned in a nation by overseas companies that are remitted back to foreign investors. If these remitted profits are very large compared with earnings from the nation’s overseas citizens and assets, the NFFI figure will be negative and GNP will be significantly below GDP.

NFFI may assume increasing importance in a globalized economy, as people and companies move across international borders more easily than they did in the past.
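The identity is trivial to compute once GNP and GDP are known. A minimal sketch with made-up figures (these are illustrative numbers, not real national accounts):

```python
def net_foreign_factor_income(gnp: float, gdp: float) -> float:
    """NFFI = GNP - GDP: what residents earn abroad minus what
    foreigners earn domestically."""
    return gnp - gdp

# Hypothetical small economy with heavy inbound foreign investment:
gdp = 500.0  # output produced inside the borders (billions)
gnp = 470.0  # output attributable to the nation's citizens and companies
print(net_foreign_factor_income(gnp, gdp))  # -30.0
```

A negative result is exactly the "smaller nation" case described above: remitted foreign profits exceed what the country's own citizens and assets earn abroad, so GNP falls below GDP.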
{}
Further deep imaging of HR 7329 A (η Tel A) and its brown dwarf companion B

@article{Neuhauser2011FurtherDI,
  title={Further deep imaging of HR 7329 A ($\eta$ Tel A) and its brown dwarf companion B},
  author={Ralph Neuh{\"a}user and Ch. Ginski and Tobias O. B. Schmidt and M. Mugrauer},
  note={Astrophysikalisches Institut und Universit{\"a}ts-Sternwarte Jena, Germany},
  journal={Monthly Notices of the Royal Astronomical Society},
  year={2011},
  volume={416},
  pages={1430-1435}
}

About 4 arcsec south of the young A0-type star HR 7329, a faint companion candidate was found by Lowrance et al. Its spectral type of M7-8 is consistent with a young brown dwarf companion. Here we report 10 new astrometric imaging observations of the pair HR 7329 A and B, obtained with the Hubble Space Telescope and the Very Large Telescope, aimed at showing a common proper motion with high significance and possible orbital motion of B around A. With 11 yr of epoch difference between the first…

Figures and Tables from this paper

Characterization of the gaseous companion κ Andromedae b - New Keck and LBTI high-contrast observations
Context. We previously reported the direct detection of a low mass companion at a projected separation of 55 ± 2 AU around the B9-type star κ Andromedae. The properties of the system (mass ratio, …

New observations of the PZ Tel system, its substellar companion and debris disc
• Physics • 2012
We present follow-up high-contrast imaging data of PZ Tel B, the substellar companion of a solar analogue pre-main-sequence star and member of the approximately 12-Myr-old β Pic moving group. Between …

The near-infrared spectral energy distribution of β Pictoris b
A gas giant planet has previously been directly seen orbiting at 8-10 AU within the debris disk of the ~12 Myr old star β Pictoris.
The β Pictoris system offers the rare opportunity to …

Astrometric follow-up observations of directly imaged sub-stellar companions to young stars and brown dwarfs
The formation of massive planetary or brown dwarf companions at large projected separations from their host star is not yet well understood. In order to put constraints on formation scenarios we …

THE HAWAII INFRARED PARALLAX PROGRAM. II. YOUNG ULTRACOOL FIELD DWARFS
• Physics • 2016
(Abridged) We present a large, uniform analysis of young (~10-150 Myr) ultracool dwarfs, based on new high-precision IR parallaxes for 68 objects. We find that low-gravity (VL-G) late-M and L dwarfs …

The Gemini NICI Planet-Finding Campaign: The Frequency of Giant Planets around Young B and A Stars
We have carried out high contrast imaging of 70 young, nearby B and A stars to search for brown dwarf and planetary companions as part of the Gemini NICI Planet-Finding Campaign. Our survey …

A survey of young, nearby, and dusty stars conducted to understand the formation of wide-orbit giant planets - VLT/NaCo adaptive optics thermal and angular differential imaging
Context. Over the past decade, direct imaging has confirmed the existence of substellar companions on wide orbits from their parent stars. To understand the formation and evolution mechanisms of …

Direct Imaging of Extra-solar Planets - Homogeneous Comparison of Detected Planets and Candidates
• Physics • 2012
Planets orbiting stars other than the Sun are called extra-solar planets or exo-planets. Since about 1989, several hundred such objects were detected using various techniques. The first companions …

FURTHER EVIDENCE OF THE PLANETARY NATURE OF HD 95086 b FROM GEMINI/NICI H-BAND DATA
We present our analysis of the Gemini/NICI H-band data of HD 95086, following the discovery of the planet HD 95086 b in L'.
The H-band data reach a contrast of 12.7 mag relative to the host star at …

Imaged substellar companions: not as eccentric as they appear? The effect of an unseen inner mass on derived orbits
• Physics • 2014
Increasing numbers of sub-stellar companions are now being discovered via direct imaging. Orbital elements for some of these objects have been derived using star-companion astrometry, and several of …

References (showing 1-10 of 46)

A Candidate Substellar Companion to HR 7329
We present the discovery of a candidate substellar companion from a survey of nearby young stars made with the Near-Infrared Camera and Multiobject Spectrometer coronagraph on the Hubble Space …

Direct detection of a substellar companion to the young nearby star PZ Telescopii
• Physics • 2010
Aims. We study the formation of substellar objects (exoplanets and brown dwarfs) that are companions to young nearby stars. Methods. With high-contrast AO imaging obtained with NACO at ESO VLT, we …

Infrared spectrum and proper motion of the brown dwarf companion of HR 7329 in Tucanae
• Physics • 2001
Up to now only four brown dwarf companions to normal stars have been found and confirmed by both spectroscopy and proper motion (namely Gl 229 B, G 196-3 B, Gl 570 D, and CoD $-33^{\circ} 7795$ B). …

Astrometric and photometric monitoring of GQ Lupi and its sub-stellar companion
• Physics • 2008
Context. Neuhauser et al. (2005, A&A, 435, L13) presented direct imaging evidence for a sub-stellar companion to the young T Tauri star GQ Lupi.
Common proper motion was highly significant, but no …

An Infrared coronagraphic survey for substellar companions
We have used the F160W filter (1.4–1.8 μm) and the coronagraph on the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) on the Hubble Space Telescope to survey 45 single stars with a median …

THE GEMINI NICI PLANET-FINDING CAMPAIGN: DISCOVERY OF A CLOSE SUBSTELLAR COMPANION TO THE YOUNG DEBRIS DISK STAR PZ Tel
We report the discovery of a tight substellar companion to the young solar analog PZ Tel, a member of the β Pic moving group observed with high-contrast adaptive optics imaging as part of the Gemini …

Resolved debris disc emission around η Telescopii: a young solar system or ongoing planet formation?
• Physics • 2009
Aims. 60% of the A-star members of the 12 Myr old β Pictoris moving group (BPMG) show significant excess emission in the mid-infrared, several million years after the proto-planetary disk is thought …

Evidence for a co-moving sub-stellar companion of GQ Lup
• Physics • 2005
We present a companion of the ≤2 Myr young classical T Tauri star GQ Lup in the Lupus star forming region at 140 ± 50 pc from imaging, astrometry, and spectroscopy. With direct K-band imaging using …

Mid-infrared imaging of brown dwarfs in binary systems
• Physics • 2008
Context. Brown dwarfs exhibit complex atmospheric signatures, and their properties are highly sensitive to effective temperature, surface gravity, and metallicity. Several physical properties of …

Direct Imaging and Spectroscopy of Substellar Companions Next to Young Nearby Stars in TWA
Direct imaging of substellar companions has been possible for several years around nearby stars (e.g. Gl 229) and young stars (e.g. TWA-5). We are searching for brown dwarfs and giant planets as …
{}
# Monotonicity of infimum of the Willmore energy with prescribed genus

Let $$\beta_g:=\inf\left\{\tfrac14\int_\Sigma H^2 \,d\mu \;\middle|\; \Sigma\subset \mathbb R^{3},\ \operatorname{genus}(\Sigma)=g \right\}$$ be the infimum of the Willmore energy over embedded genus-$g$ surfaces. A classical result of Willmore says that $\beta_0=4\pi$, and Marques–Neves showed that $\beta_1=2\pi^2$ as well as $\beta_g\geq 2\pi^2$ for $g\geq 1$. By estimating the Willmore energy of the Lawson surfaces, one finds $\beta_g<8\pi$ for all $g$. On the other hand, by a result of Kuwert–Li–Schätzle, one has $\beta_g\to 8\pi$ as $g$ tends to infinity. This suggests that $\beta_g$ is non-decreasing (perhaps even strictly increasing) in $g$. I am wondering if this is true?

See also Willmore minimizers for genus $\geq 2$ for a related question.

• As far as I know, nobody has been able to calculate the Willmore energy for Lawson surfaces of genus bigger than 1; it is only possible to estimate their energy. – Sebastian Sep 19 '18 at 8:25
• You're completely right, I just edited my post. – user128470 Sep 20 '18 at 15:49
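Collecting the facts quoted in the question in one place (Willmore's value for $g=0$, Marques–Neves for $g\ge 1$, the Lawson-surface estimate, and the Kuwert–Li–Schätzle limit):

```latex
\beta_0 = 4\pi, \qquad \beta_1 = 2\pi^2, \qquad
2\pi^2 \le \beta_g < 8\pi \quad (g \ge 1), \qquad
\lim_{g \to \infty} \beta_g = 8\pi .
```

So all the $\beta_g$ with $g\ge 1$ lie in the window $[2\pi^2, 8\pi)$ and accumulate at its right endpoint; monotonicity would mean the sequence approaches $8\pi$ without ever backtracking, which the known bounds alone do not decide.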
{}
# Graph Interface: Storing Vertices in an Array vs HashTable

I have just started learning graph algorithms and am trying to arrive at an ideal interface. I understand that this code will not be used anywhere else (I will certainly use Boost::Graph), but I just want to make sure that what I am writing is not completely wrong.

Specifically, my question about the implementation below concerns the data structure used to store nodes/vertices and edges. All the other students in the MOOC I am enrolled in use an std::vector to represent the array of vertices/nodes and edges in the graph. However, I was wondering if it might be more prudent to use an std::unordered_map (i.e. a hash table) instead. This is primarily because node removal is expected to happen quite frequently in many graph algorithms (e.g. Karger's min-cut), and storing nodes in an std::vector makes removal a linear-time operation. Storing them in an std::unordered_map allows for removal/insertion/lookup in constant time. The only drawback of such an implementation versus one that stores the nodes as a contiguous array is that finding a random edge/node is necessarily a linear-time operation in the hash-table implementation, while it is constant-time in the array-based implementation.

Is my reasoning above correct? Any other comments regarding the code are also welcome.
```cpp
namespace algorithms{

/// \struct Graph
/// \brief Representation of graph with vertices and edges
template <typename ValueType>
struct Graph{
    /// \brief Constructs a graph with 0 vertices and 0 edges
    Graph() = default;

    /// \brief Constructs a graph with n vertices and 0 edges
    Graph(const int& num_vertices);

    /// \brief Copy constructor
    Graph(const Graph& other);

    /// \brief Creates a new vertex with specified id and returns it
    /// This is ideal for creating graphs based on adjacency lists
    /// \note Vertex creation will fail if there is already another vertex
    /// \return true if vertex creation succeeded, false otherwise
    bool create_vertex(const int& vertex_id);

    /// \brief Removes a vertex from this graph
    /// \note Removal can fail if no vertex exists with specified id
    /// \return true if vertex removal succeeded, false otherwise
    bool remove_vertex(const int& vertex_id);

    /// \brief Adds an edge to the graph
    /// \return true if edge addition succeeds, false if it fails
    bool add_edge(const int& first_vertex_id, const int& second_vertex_id);

    /// \brief Removes an edge from graph
    bool remove_edge(const int& first_vertex_id, const int& second_vertex_id);

    /// \brief Returns the number of vertices in graph
    int get_vertex_count() const;

    /// \brief Returns the number of edges in graph
    int get_edge_count() const;

    /// \brief Fills output vector with the ids of neighbouring vertices to specified one
    /// \return true if there is a vertex with specified id, false otherwise

    /// \brief Fills output vector with adjacent vertices along with number of edges between
    /// specified vertex and the corresponding neighbour
    /// \param[in] vertex_id id of vertex whose neighbours are sought
    /// \param[in] consider_directed boolean indicating if edges are directed or not
    /// \param[out] adjacent_vertices output vector of std::pair(adjacent vertex, num edges between specified vertex and this adjacent one)
    /// \return true if there is a vertex with specified id, false otherwise

    /// \brief Returns a random vertex from the graph
    /// \note This is a function with linear time complexity
    int get_random_vertex_id();

    /// \brief Returns a random edge from the graph, i.e. it sets the two output params
    /// with the vertex ids of the endpoints of this randomly chosen edge
    /// \note This is a function with linear time complexity
    void get_random_edge(int& start_vertex, int& end_vertex);

    /// \brief Returns whether an edge exists between the two specified vertex ids
    bool has_edge_between(const int& start_vertex, const int& end_vertex, bool consider_directed_edges_only) const;

    /// \brief Populates the input vector with the list of edges in it
    void get_edge_list(std::vector<std::pair<int, int>>& output_edge_list) const;

    /// \brief Returns number of edges between the two specified vertices
    int get_num_edges_between(const int& start_vertex, const int& end_vertex, bool consider_directed_edges_only) const;

    /// \brief Sets value for a particular vertex
    /// \return true if a vertex exists with specified id, false otherwise
    bool set_value(const int& vertex_id, const ValueType& value);

    /// \brief Returns value of a particular vertex
    /// \return true if a vertex exists with specified id, false otherwise
    bool get_value(const int& vertex_id, ValueType& output) const;

    /// \brief Overloads the stream operator to print this graph out
    template<typename VType>
    friend std::ostream& operator<<(std::ostream& os, const Graph<VType>& dt);

private:
    /// \struct GraphVertex
    /// \brief A single vertex in a Graph
    struct GraphVertex{
        /// \brief Id uniquely identifying this vertex
        int vertex_id;

        /// \brief Vector holding ids of adjacent vertices to this one
        /// \note We use a std::list instead of an std::vector to allow of efficient
        /// addition and removal of graph vertices

        /// \brief Value to be assigned to this vertex
        ValueType value;
    };

    /// \struct PairHash
    /// \brief Provides a hash for an std::pair
    template <class T, typename U>
    struct PairHash{
        size_t operator()(const std::pair<T, U> &key) const{
            return std::hash<T>()(key.first) ^ std::hash<U>()(key.second);
        }
    };

    /// \struct PairEqual
    /// \brief Struct used for equality comparison in unordered maps with
    /// a pair as its key
    template <class T, typename U>
    struct PairEqual{
        bool operator()(const std::pair<T, U> &lhs, const std::pair<T, U> &rhs) const{
            return lhs.first == rhs.first && lhs.second == rhs.second;
        }
    };

    /// \struct GraphEdge
    /// \brief A single edge in a Graph
    struct GraphEdge{
        /// \brief Array of two vertices making up this edge
        int end_points[2];
    };

    /// \brief A hash map containing the vertices in this graph referenced against their ids
    std::unordered_map<int, std::unique_ptr<GraphVertex>> vertices;

    /// \brief A hash multimap of all the edges referenced by the ids of each edge's constituent vertices
    /// The reason for using an unordered_multimap instead of an unordered_map is to allow for multiple parallel edges between two vertices
    std::unordered_multimap<std::pair<int, int>, std::unique_ptr<GraphEdge>, PairHash<int, int>, PairEqual<int, int>> edges;

    /// \brief Random number generator
    std::mt19937 mt(std::random_device());

    /// \brief Flag keeping track of whether generator has been seeded
    bool seeded = false;
};

template <typename ValueType>
std::ostream& operator<<(std::ostream& os, const algorithms::Graph<ValueType>& graph);

} // namespace algorithms

#include <algorithms/graph/graph.tch>

#endif // ALGORITHMS_GRAPH
```

• How is finding a random edge/node constant time in an array? Also, complexity ignores constant factors and is only meaningful for "a lot" of elements, and sometimes you run out of memory before you get to numbers where it's worth it. – nwp Nov 17 '15 at 8:58
• @nwp Finding an edge is constant time in an array because once you generate a random index 'r' in [0, array_size), you can jump to the r'th index in constant time.
I don't understand what you mean by "complexity ignores constant factors and is only meaningful for 'a lot' of elements", nor in what context you say that "sometimes you run out of memory before you get to numbers where it's worth it." – Nov 17 '15 at 10:03
• Don't assume that O(1) is faster than O(n). You should measure that based on your data set size. There is a lot of hidden complexity in unordered_map. Also remember that vector gains a lot of speed from locality and hardware caching. – Nov 17 '15 at 16:51
• Also, the removal of a node does not necessarily mean moving all the other nodes in the vector. Actually, I would think the exact opposite, as the position of the node is probably its identifying feature. Removal of a node could simply be marking it as dead, an O(1) operation. – Nov 17 '15 at 16:55

# In general your reasoning is ok

I think it is very valid to try to optimize the time complexity for certain operations if you do not know the size of the structures beforehand. That said, of course it helps to have actual use cases for performance comparisons.

# Provide the full code

It's hard to provide feedback without the actual implementation of your methods. For instance, I would think your implementation of get_adjacent_vertices is probably inefficient given your data structures.

1. Your documentation is out of sync with the code. That is very bad, worse than no documentation. ("Creates a new vertex with specified id and returns it", "We use a std::list instead of an std::vector")
2. Your documentation is missing the most important information: what is your actual graph? Directed? Multiple edges? Values at vertices/edges?
3. It should always be clear for all methods what the failure conditions are. Also make clear when failure means a return flag rather than an exception.

# Vertex "type"

What actually represents a vertex in your graph? Do you really want to have separate ids for vertices? Or is a vertex uniquely identified by its value?
In any case you should use a vertex_type to identify your vertices in method parameters, similar to how std::vector::size_type is used.

# Return by value or reference instead of by input reference

For instance, your get_value always requires an unnecessary and potentially expensive copy operation (from the internal GraphVertex::value to the outside variable provided by the caller through the reference). Consider implementing ValueType& operator[](int vertex_id) and const ValueType& operator[](int vertex_id) const instead of get_value and set_value. This is also more consistent with common interfaces.

For example, in your current get_edge_list: what happens if there are already elements in the provided vector? It complicates things and is more difficult to use. It requires the user to define a variable first and then pass it into your methods, rather than defining it directly from a return value. If you have multiple return values, use a std::pair or GraphEdge.

# Take ints by value instead of const&

If you pass small types that are cheap to copy, it is preferred to take them by value. Note: if you do not know the type in generic code, a const& is just fine.

# Allow for in-place construction

Currently a user has to first create a vertex and then assign it a value in a second method call. This is complicated and error-prone. Instead, allow for in-place construction (emplace) or at least copy construction of new vertices.

# GraphEdge

There are a number of things wrong with GraphEdge.

1. You don't use it consistently. Sometimes you use std::pair instead.
2. Don't use an array there. Seriously. Why would you introduce so many ways to shoot yourself in the foot without need?

# PairHash

Your PairHash implementation is bad, because it means hash({a,b}) == hash({b,a}) (but not {a,b} == {b,a}).

# PairEqual

There are already operator== and friends for std::pair.

# Avoid inconsistent state

Instead of keeping track of whether your rng is seeded, seed it in the constructor.
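The PairHash point is easy to demonstrate, and the usual fix is an order-sensitive hash_combine-style mixer. A small Python illustration of the idea (the helper names are mine, and the constant is the 64-bit golden-ratio constant Boost's hash_combine uses; in the C++ code the same mixing would go inside PairHash::operator()):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF  # emulate 64-bit unsigned arithmetic

def xor_pair_hash(a: int, b: int) -> int:
    # Mirrors the reviewed PairHash: XOR is symmetric, so (a, b) and (b, a) collide.
    return (a ^ b) & MASK64

def combined_pair_hash(a: int, b: int) -> int:
    # hash_combine-style mixing: the second component is folded in
    # asymmetrically, so swapping the pair changes the result.
    h = a & MASK64
    h ^= (b + 0x9E3779B97F4A7C15 + ((h << 6) & MASK64) + (h >> 2)) & MASK64
    return h & MASK64

print(xor_pair_hash(3, 7) == xor_pair_hash(7, 3))            # True: the collision
print(combined_pair_hash(3, 7) == combined_pair_hash(7, 3))  # False
```

The symmetric collision is not just cosmetic: in a multigraph keyed by (from, to) pairs, every edge and its reverse land in the same bucket, degrading the unordered_multimap toward its linear worst case.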
# Naming

I find some of your names too verbose and far from common naming schemes (STL, Boost), especially get_.... For instance get_num_edges_between: I would think count_edges(vertex_type, vertex_type, directed) is just fine.

# unique_ptr indirection

Why do you manage the GraphEdge objects through unique_ptr inside edges? This has a negative performance impact. Use such an indirection only if you need to move the objects around often but cannot cheaply do so. The same argument applies to vertices, but it requires more careful reasoning because the GraphVertex object is more complex. Is it required to move GraphVertex objects around often in std::unordered_map operations? (I don't think so.) Is it expensive to move a GraphVertex object? (It depends on ValueType.)
{}
When practicing for SAT math, working through all kinds of past questions can feel overwhelming. If your prep time is limited and you want to raise your score quickly, the best choice is to practice with Khan Academy questions: they are more authoritative, and they help your preparation that much more. Below is the material we have compiled for your reference!

18. V = 2.5(6.4 − t)

Salma is filling her water jugs with spring water at the supermarket. The above equation gives the empty space in her jugs, V, in US gallons, after t minutes at a constant fill rate. How many minutes will it take to fill the jugs?

Correct answer: 6.4
Difficulty level: 2

19. T = 0.25q + 87

Manon's statistics professor puts a bonus question on every test, which adds 0.25 points to a student's overall grade at the end of the term. The above equation gives Manon's current overall grade, T, after taking into account q extra credit questions answered correctly. What does the 87 mean in this equation?

A. Manon must get an 87 or above to pass.
B. Manon's average is an 87 before adding the extra credit.
C. After adding in her extra credit, Manon's average is an 87.
D. Manon's professor gave 87 opportunities for extra credit this term.

Correct answer: B
Difficulty level: 2

20. T = 10m + 40

Tim decides to cook a steak. The interior temperature, T, of the steak, in degrees Fahrenheit (℉), after cooking for m minutes is given in the equation above. What does the 10 mean in the equation?

A. The interior temperature is 10℉ when Tim starts cooking the steak.
B. The interior temperature of the steak increases by 10℉ for every minute it is cooked.
C. The interior temperature of the steak decreases by 10℉ for every minute it is cooked.
D. The interior temperature of the steak will increase a total of 10℉ while being cooked.

Correct answer: B
Difficulty level: 2

21. 5.5B + 4R = 28

The above equation models the cost if Amit picks B pounds of blueberries and R pounds of raspberries at a farm where blueberries cost \$5.50 per pound and raspberries cost \$4.00 per pound. According to the equation, how much does Amit spend in total on both types of berries?
A. \$9.50
B. \$22.00
C. \$28.00
D. \$56.00

Correct answer: C
Difficulty level: 2

That concludes this look at Khan Academy SAT math questions. We hope that by working through the material above, you can practice SAT math questions more effectively and earn a higher score.

Recommended prep resources:
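Answers like those for questions 18 and 21 can also be checked mechanically. A quick sketch (the function names are mine):

```python
def empty_space(t: float) -> float:
    """Question 18: V = 2.5(6.4 - t), gallons of empty space after t minutes."""
    return 2.5 * (6.4 - t)

# The jugs are full when V = 0, i.e. 6.4 - t = 0, so t = 6.4 minutes.
print(empty_space(6.4))  # 0.0

def total_cost(b: float, r: float) -> float:
    """Question 21: cost of B pounds of blueberries and R pounds of raspberries."""
    return 5.5 * b + 4.0 * r

# Any (B, R) pair satisfying 5.5B + 4R = 28 costs exactly $28 in total: answer C.
print(total_cost(4.0, 1.5))  # 28.0
```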
{}
• August 21, 2022 • 23 min
The Paper that Keeps Showing Up
Let’s talk about one of my favorite cryptography papers.

• August 14, 2022 • 35 min
Some KEMs and Some Proofs
In this post, I’d like to provide a technical introduction to key encapsulation mechanisms (KEMs), with a focus on proving the security of various constructions.

• July 16, 2022 • 8 min
Basic Cryptography Without Fluff
This blog has covered many topics in cryptography so far, but not many basic ones. This post is a crack at providing such an approach. With luck, it should bring utility to unfamiliar folk, but also grins for folk familiar with this art.

• June 26, 2022 • 12 min
On Identifiable Aborts
Many cryptographic protocols attempt to satisfy a notion of “identifiable abort”, where if a malicious party causes the protocol to prematurely halt, then they can be detected. In practice, I think that this notion isn’t all that useful.

• May 28, 2022 • 43 min
State-Separable Proofs for the Curious Cryptographer
This blog post is an introduction to state-separable proofs, a technique for proving the security of cryptographic schemes.

• May 14, 2022 • 13 min
Some Cryptography Books I Like
This is just a brief post going over a few books on cryptography I’ve read, and would potentially recommend to people interested in the topic.

• May 1, 2022 • 13 min
Explaining Yao's Garbled Circuits
The protocol so fun you have to implement it! Like I did recently.

• April 23, 2022 • 14 min
Canetti et al's Paradoxical Encryption Scheme
When proving security, cryptographers often model hash functions as random oracles, which act like random functions. In practice, hash functions are different from random oracles. The question is: does this difference impact security?

• April 3, 2022 • 5 min
What I've Been Working On: 2022-W13
Secret sharing seed phrases, studying MPC, and Yao’s Garbled Circuits.

• March 20, 2022 • 9 min
Encoding Traits with Go Generics
It turns out that Go v1.18’s generics functionality is enough to encode traits, which has many applications.
In particular, we could write a generic library for Elliptic Curves, among other things. • 2022年03月13日 • 9 分 What I've Been Working On: 2022-W10 A toy blockchain in Haskell, simulation-based security, and tinkering on Cryptographic foundations. • 2022年03月07日 • 12 分 On Monero's Ring Signatures Monero is a cryptocurrency which claims to be “private, and decentralized”. One of Monero’s main tools towards this privacy is the ring signature. Ring signatures allow you to sign on behalf of a group, without revealing which member of the group you are. They can be constructed as an elegant extension of Schnorr signatures, and aren’t all that hard to understand either. • 2022年02月06日 • 9 分 On the Malleability of ECDSA Signatures The ECDSA signature scheme is quite ubiquitous, used everywhere from TLS to various cryptocurrencies like Bitcoin. Funnily enough, it turns out that it suffers from a few malleability issues, although I doubt these pose a serious issue in practice. • 2021年10月02日 • 6 分 Porting Ludus to the Web Recently, I ported my NES emulator, Ludus to run in the browser, using WASM. You can play around with it here. This post is a brief overview of the interesting aspects in creating this port. Ludus Ludus was an NES emulator that I wrote 3 years ago, back in 2018. I was starting my BSc at EPFL then, so it’s a bit fitting to revisit the project now that I’m starting my MSc. • 2021年09月03日 • 2 分 My Quick Attempt at Bluesky's Satellite Challenge Twitter’s Bluesky initiative created a little challenge where the goal was to verifiably link different digital identities together. This is my attempt at this. • 2021年08月03日 • 5 分 Taproot Signatures and BIP-32 How do Bitcoin’s new Taproot signatures interact with the good old key derivation methods from BIP-32? It turns out that the answer isn’t all that straightforward. • 2021年07月25日 • 9 分 On Multi-Set Hashing Designing a hash function where the order of inputs doesn’t matter is surprisingly easy. 
• 2021-07-18 • 9 min • Quantum Computing: Some Analogies: I've developed a few intuitions about algorithms on quantum computers. This post is an attempt to share them.
• 2021-07-10 • 13 min • Signatures From Identification Schemes: It turns out that all you need to make a signature scheme is a way to prove your identity.
• 2021-07-05 • 15 min • Introducing Nuntius: Recently, I made a toy E2E encrypted messenger, called Nuntius. I had fun tinkering on it, and thought that some of the cryptography involved would be fun to explain.
• 2021-06-20 • 11 min • End-to-End Encryption in Web Apps: End-to-end encryption is a very appealing guarantee of privacy, and more applications want to provide this guarantee. Web applications are popular, and they want to implement this functionality in the browser. What kind of guarantees does a user still have with a web app, served to them dynamically?
• 2021-06-06 • 17 min • Introducing Nimotsu: Recently, I've been working on a little encryption tool called Nimotsu. My goal with this project was to implement all of the cryptographic primitives involved. I had a lot of fun doing so, and thought it would make for an interesting blog post.
• 2021-04-05 • 23 min • Constant-Time Big Numbers: An Introduction: Over the past couple of months, I've been working on a library for constant-time Big Numbers in Go. I think it's about time that I presented a bit of this work.
• 2021-02-28 • 9 min • Some Thoughts on Numeric Classes: This is just a quick post, crystallizing some of the ideas I've had recently about organizing numeric classes in Haskell.
• 2021-02-21 • 6 min • Fractals on The Web: Last week, I made a little web application for visualizing some fractals, and I thought I'd write up a few thoughts about how it works.
• 2021-02-14 • 11 min • Spaced Repetition for Mathematics: Recently, I've been experimenting with using spaced repetition for self-studying "advanced" mathematics. This post goes through my motivations for adopting this system, as well as a few techniques I've used in adapting it to mathematics.
• 2021-02-05 • 5 min • Programming Problem: Run-Length Encoding: This is just a quick post about a programming problem that's been circulating around on Twitter recently.
• 2021-02-02 • 17 min • Tychonoff's Theorem and Zorn's Lemma: Tychonoff's theorem proves that the product (even infinite) of compact spaces is also compact. The proof makes judicious use of Zorn's lemma. In fact, it uses it so well that I gained an appreciation for how fun the lemma can be.
• 2021-01-24 • 6 min • On Strings in Compilers: I've been thinking recently about handling strings and names in compilers in a more principled way, and I think I've come up with a nice way of doing this.
• 2021-01-16 • 6 min • Let's Describe Algorithms Better: I was reading a cryptography textbook, and I came across an algorithm that reminded me why I often dislike descriptions of algorithms in textbooks. This is a short post going over this example, and thinking about how to improve these descriptions.
• 2021-01-10 • 13 min • Making an IO: Recently I've been wrapping up my work on a compiler for a subset of Haskell. While my subset doesn't (yet) have any support for IO, I've been thinking about how to implement it.
• 2020-12-31 • 8 min • This is a post going over the basic concept of defining objects through Universal Properties, in Category Theory, with explanations and examples in Haskell.
• 2020-12-28 • 67 min • In this post, we'll go over creating a parser for our subset of Haskell. This stage of the compiler is responsible for taking the tokens that our lexer produced in the previous part.
• 2020-12-13 • 12 min • Chinese Remainder Theorem for Programmers: This is a quick post about the Chinese Remainder Theorem. Specifically, how to use it to solve a system of simple modular equations.
• 2020-12-10 • 61 min • In this post, we'll go over creating a lexer for our subset of Haskell. This stage of the compiler goes from the raw characters in our source file to a more structured stream of tokens. We'll finally be digging into some serious code this time, so get your editor ready if you're following along!
• 2020-12-07 • 7 min • (Un)fold as (Co)algebra: One cool thing about lists is that they have canonical ways of consuming and producing them: folds, and unfolds. It turns out that these are canonical, in that folding and unfolding functions are themselves isomorphic to lists. In this post, we'll explore why this is true.
• 2020-12-05 • 4 min • A simple algorithm for UNIX's Tree: This morning I decided to implement my own version of UNIX's classic tree command in Rust. I thought the algorithm was both non-trivial, and interesting enough to warrant a quick blog post.
• 2020-11-23 • 19 min • This is the first "real" post in the Haskell in Haskell series. In this post we'll go over setting up a basic project in Haskell. We'll be building our compiler on top of this foundation in the next posts.
• 2020-11-10 • 9 min • On Deep Immutability: This post is about the different ways objects can be immutable and mutable in programming languages, why deep immutability is what we really want, and a path towards implementing that in higher level languages.
• 2020-11-01 • 13 min • This is an introduction to a series I'm calling Haskell in Haskell. The goal of this series of posts is to go through the implementation of a subset of the Haskell language, from parsing, to typechecking, to code generation, using Haskell itself.
• 2020-10-19 • 2 min • Always use offsetof: If you want to find out how many bytes a certain portion of a struct uses, you might be tempted to do some arithmetic with sizeof, but this will yield unpredictable results!
• 2020-10-18 • 12 min • Lexer Combinators: When you write a parser in Haskell, you want to use parser combinators. When you write a lexer, you should use lexer combinators! We'll see what these are, and how to use them to write a simple lexer.
• 2020-10-14 • 8 min • Monty Hall and Counterfactuals: This is about some shower thoughts I had recently about the infamous Monty Hall problem. Namely, how to make sense of the counter-intuitive results involved. We'll see how reasoning counterfactually can make the best strategy seem a lot clearer.
• 2020-10-10 • 7 min • My Blog: Version 4: Having recently inaugurated the 4th version of this blog, I want to go over the technologies that I used to make this, as well as some of the tricky things I had to do to make things work.
• 2020-10-02 • 9 min • Categorical Graphs: This post is a basic introduction to the idea of Categorical Graphs, or just the standard theory of Graphs, developed through the lens of Category Theory, and generalized in the obvious ways that follow through that lens. This is just an introduction mainly because this theory doesn't seem to have been developed very far yet, and I haven't been able to develop it that much independently so far. I think I have a good grasp on some very basic ideas here, and wanted to present them while they're still fresh in my head.
• 2020-09-28 • 8 min • The Web or The Internet?: So this is going to be a bit of a rant on some things I've been thinking about recently. To set the background for this post, I've been working on a note-taking app with a focus on individual blocks referencing each other. Kind of like Roam. The idea is that this kind of system can be used for discrete nodes of knowledge, which we can call blocks, for simplicity. Blocks don't contain much on their own, besides some text, and references to other blocks.
• 2020-09-20 • 5 min • Wrapping GtkImContext: So, recently I've started working on a text editor, using Gtk, via gtkmm in C++. I've decided to try my hand at writing the text area widget from scratch, just using the text layout facilities provided by Gtk's Pango. My rationale for this is twofold: it seems that Gtk's TextView widget is missing a few nice things, like source lines, and using the more developed SourceView would mean not doing a lot of the work myself.
• 2020-09-09 • 10 min • Recursive Types as Initial Algebras: Recently (well, more like a month ago), I came across this interesting observation. In my head, I immediately jumped to the notion of Algebras in Category Theory. I had recently studied that notion, found it quite interesting, and was very happy to see this observation, because it was actually quite obvious to me thanks to what I'd recently learned. The goal of this post is to unpack that tweet, and then explain why that observation is true.
• 2020-08-30 • 17 min • Encoding the Naturals: In this post, we'll cover 3 ways I know of encoding the natural numbers $\mathbb{N}$ in your standard functional language with recursive types and polymorphism. At least, these are the 3 most generalized ways of doing it. As we'll see, some common encodings are just specific cases of a more general encoding. Prerequisites: some familiarity with defining data types in a functional-esque language might be helpful, but shouldn't be strictly necessary.
• 2020-08-17 • 14 min • Empty vs NonEmpty Groups: The usual definition of a Group excludes empty groups by definition. There are alternate definitions of a Group that allow us to include the empty set, and which are equivalent to the normal definition in all other cases. This post explores this alternate definition, and the resulting differences with the normal concept of a Group, one of the most important structures in abstract algebra.
• 2020-06-18 • 7 min • Monomorphisms vs Epimorphisms: The concepts of monomorphism and epimorphism are very important in Category Theory. I always had a hard time remembering which one was which until I thought about a good mnemonic.
• 2020-06-10 • 8 min • Simple WebRTC Video Chat: Recently I made an app for making group video calls. The difference between this and something like Zoom is that there's no central server responsible for routing video calls between the participants. Instead, I used WebRTC in order to set up peer-to-peer (P2P) calls between all of the members of a group.
• 2020-06-03 • 25 min • Parsing A Whitespace-Sensitive Language: This post is about how to parse programming languages which define blocks using indentation, instead of braces and semicolons. We'll end up learning how to infer these blocks based on the indentation, using TypeScript to parse a toy functional language.
• 2020-04-13 • 6 min • What I like about Roam (so far): I read a lot of stuff, mainly on the internet. One problem I've always had is keeping track of the important bits of the things that I read. The most important information sticks with you, but it's not easy to remember everything. There's always a nagging fear that there's some important bit you might be missing, or that you might forget something useful.
• 2020-02-13 • 15 min • Against Fullstack Data Sharing: This is a post about how I work with data in fullstack development. Specifically, I share what I think are good patterns for sharing data and logic between the frontend and the backend of an application.
• 2020-01-31 • 19 min • Review: The New York Trilogy: This is a review of The New York Trilogy, written by Paul Auster in 1987. As the title says, this book is actually a trilogy of three shorter novels: City of Glass, Ghosts, and The Locked Room.
• 2020-01-10 • 7 min • Integrating Notes and Spaced Repetition: So I've been tinkering around with better systems for taking notes recently. I'm not a big fan of keeping notes around, because I don't find myself actually revisiting them, and when I do, I don't get much value out of it. On the other hand, my experience with language learning has definitely shown me how effective spaced repetition is when it comes to keeping things in your head.
• 2020-01-09 • 5 min • React Pitfalls: useState initialization: This is a quick post about a "gotcha" I encountered recently in a React application. This involved the use of React's useState hook, which had a subtle difference between how I thought the hook worked, and how it actually worked.
• 2019-12-28 • 12 min • Structured Immersion: When it comes to language learning, one approach that I really subscribe to is immersion. This means trying to absorb as much of the language as possible, as often as possible. It's often said that "speaking with locals" is a good way to accelerate the learning process, and there's a lot of truth to that statement.
• 2019-10-27 • 5 min • Layerability and Abstraction: This is a post about one of my favorite aspects of networking: how different protocols and concepts are layered together. I go over the concept of Layerability and how it applies to networking and programming more generally.
• 2019-08-31 • 9 min • Poline: This is a post about Poline, a tiny programming language I wrote recently. The main "gimmick" of Poline is a feature called Green Threads. In fact, Poline doesn't have many other features besides them.
• 2019-08-17 • 8 min • From Interfaces to Traits: This is a post about how different languages handle the concept of interfaces. We'll go over the classical OO way of handling them, with Java, to the more recent approaches of languages like Rust, as well as others in between.
• 2019-07-07 • 5 min • Sentence Banking: This is a post about ginkou, a tool I made recently. This tool uses Rust, SQLite, as well as mecab to archive sentences, and then to retrieve them based on the words they contain.
• 2019-06-14 • 6 min • Data Races vs Race Conditions: This is a quick post about the difference between Data Races and Race Conditions, and how data structures or patterns providing freedom from data races can fail to provide race condition freedom.
• 2019-06-13 • 3 min • Introducing Ludus: This is a short post about a crate I recently published: https://crates.io/crates/ludus. This crate provides the core logic of an NES emulator, and can be used to build independent GUI applications.
• 2019-05-14 • 5 min • The Component Pattern: This post details a useful pattern for organizing stateful components in functional code. This post assumes knowledge of Haskell, up to Monad-Transformers.
• 2019-05-03 • 12 min • Bittorrent Is Hard - 1: Having worked on a bittorrent client in Haskell recently, I wanted to write about a few tricky aspects of the implementation that I encountered: managing the concurrent piece downloads, and saving them to files.
• 2019-03-06 • 6 min • Mutability Is a Great Secret To Have: When programming in a Functional style, Mutability is often avoided on principle. But what's more important is having functions be immutable from the outside. In this post we'll see how to use interior mutation while providing an immutable interface.
MSF Data Analysis

Asked by olive1212, 24 months ago:

I am trying to work with MSF files (and RAW files) downloaded from a manuscript via PRIDE/ProteomeXchange. I am just trying to access the complete proteome (not just peptide sequences), with confidence/coverage, from the file. Unfortunately, the Thermo Proteome Discoverer software used to generate the MSF files was version 1.4.1.14, while the version of Proteome Discoverer (2.4.0.305) I am working with needs MSF files in at least version 2.3.0.0 format. I do not have a great deal of expertise in PD, so I do not know if there is a way to convert these files. I have tried converting the MSF files with M2Lite, ProCon, and Thermo MSF Parser with no luck. Does anyone have advice? I greatly appreciate any and all help!

Tags: MSF, Proteomics, Thermo Proteome Discoverer

Reply: If you were able to load the MSF file into PD (your screenshot), what other fields not in the screenshot are missing that you need? Are you not able to export or view the complete list of detected proteins from the PD session shown in your screenshot? Interested to see what responses the community has for this. A PD MSF is a SQLite-format file, so the files can be opened directly in sqlite3 (for a PD version 2.0 file):

# open a .msf in sqlite3
$ sqlite3 xxxx_yyy.msf
# see all the data tables in the .msf file:
sqlite> .tables
AminoAcidModifications                PeptideItemTargetEntitys
AminoAcidModificationsAminoAcids      PeptideScores
<snip>....
# select some file-level results
sqlite> select fileid, filename from workflowinputfiles order by fileid asc;
....

If you have sqlite3 and a grasp of the db schema, that might allow you to pull results out. Unfortunately, my understanding is that the format has changed significantly between PD versions ~1.x and 2.x, and the internals are not documented, so it can be difficult to work with.
Are the versions of M2Lite, ProCon, and Thermo MSF Parser that you tried matched to your 1.4.1.14 MSF file?

Reply from the original poster: Thank you for all the information! When I load the files into PD (screenshot), they do show a list of proteins and the indicated #PSMs. However, it does not give any confidence information generated by Percolator, nor the coverage or #Peptides, which makes the list of proteins fairly useless. I am able to export it to Excel only, but I probably need the confidence results to be able to work with any of the proteins. I cannot export it as mzIdentML, pepXML, or protXML because it says it is missing the workflow. I am not familiar with sqlite3, but I appreciate the tip and will pursue that option further!
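If sqlite3 on the command line feels unfamiliar, the same inspection can be scripted from Python with the standard-library sqlite3 module. This is only a sketch: the `WorkflowInputFiles` table and its `FileID`/`FileName` columns mirror the interactive session above, but other table names vary between PD versions, so list the tables first.

```python
# Minimal sketch for poking at a Proteome Discoverer .msf file, which is
# just a SQLite database. Only WorkflowInputFiles (shown in the sqlite3
# session above) is assumed; everything else should be discovered with
# list_tables() because the schema differs across PD versions.
import sqlite3

def list_tables(msf_path):
    """Return the names of all tables in an .msf (SQLite) file."""
    with sqlite3.connect(msf_path) as con:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

def input_files(msf_path):
    """Return (FileID, FileName) pairs from the WorkflowInputFiles table."""
    with sqlite3.connect(msf_path) as con:
        return con.execute(
            "SELECT FileID, FileName FROM WorkflowInputFiles ORDER BY FileID"
        ).fetchall()
```

From there, a pandas `read_sql_query` call against the peptide/protein tables (once identified) would give something exportable, without needing PD to accept the old file version.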
True: life is nice in the metrizable case. On the contrary, note that for a non-metrizable compact convex subset of a locally convex space, the extreme points need not even form a Borel set. This was shown by Bishop and de Leeuw, "The representation of linear functionals by measures on sets of extreme points", Ann. Inst. Fourier (Grenoble) (1959). A very good reference for these topics is Phelps's LNM volume Lectures on Choquet's Theorem (2001).
# Tagging functional groups

Is it standard practice to name a functional group by the Greek letter of the carbon it is attached to? I recently read a question in which the order of translation (protein synthesis) was asked, and a couple of options labelled the $\ce{NH2}$ as $\ce{\alpha-NH2}$ and the $\ce{COOH}$ as $\ce{\alpha-COOH}$. I read the wiki; it says H atoms are labeled that way, but are functional groups also labeled in the same way?

• As you can see, they are. – Mithoron Apr 29 '17 at 15:50
• Definitely not standard though. The standard is nothing but the IUPAC name. These are simply useful notations chemists use for certain chemicals, as they have useful properties being that way. – Pritt Balagopal Apr 29 '17 at 16:39
# Algorithms with Constructible Network Architecture

## Set up connections

There are several ways to define relations between layers inside the algorithm. We can use a predefined network as the first argument:

```python
from neupy import algorithms, layers

network = layers.join(
    layers.Input(10),
    layers.Sigmoid(40),
    layers.Sigmoid(2),
)
gdnet = algorithms.GradientDescent(network, step=0.2, shuffle_data=True)
```

Or we can set up a list of layers that defines sequential relations between the layers:

```python
from neupy import algorithms, layers

gdnet = algorithms.GradientDescent(
    [
        layers.Input(10),
        layers.Sigmoid(40),
        layers.Sigmoid(2),
    ],
    step=0.2,
    shuffle_data=True,
)
```

This is just a syntax simplification that allows avoiding the `layers.join` function. Small networks can be defined with the help of the inline operator:

```python
from neupy import algorithms
from neupy.layers import *

gdnet = algorithms.GradientDescent(
    Input(10) > Sigmoid(40) > Sigmoid(2),
    step=0.2,
    shuffle_data=True,
)
```

And the last way to define connections is just to set a value equal to a list or a tuple of integers:

```python
from neupy import algorithms

gdnet = algorithms.GradientDescent((10, 40, 2), step=0.2, shuffle_data=True)
```

It's obvious that during this initialization we didn't actually set up any layer types. By default, NeuPy constructs from the tuple a simple MLP network that contains dense layers with the sigmoid activation function. This type of initialization is typically suitable for tests or model benchmarks.

## Train networks with multiple inputs

NeuPy allows training networks with multiple inputs.
```python
from neupy import algorithms, layers

network = algorithms.GradientDescent(
    [
        [[
            # 3 categorical inputs
            layers.Input(3),
            layers.Embedding(n_unique_categories, 4),
            layers.Reshape(),
        ], [
            # 17 numerical inputs
            layers.Input(17),
        ]],
        layers.Concatenate(),
        layers.Relu(16),
        layers.Sigmoid(1),
    ],
    step=0.5,
    verbose=True,
    error='binary_crossentropy',
)

# The categorical variable should be first, because the
# categorical input layer was defined first in the network
network.train([x_train_cat, x_train_num], y_train,
              [x_test_cat, x_test_num], y_test, epochs=180)
y_predicted = network.predict([x_test_cat, x_test_num])
```

## Algorithms

NeuPy supports lots of different training algorithms based on backpropagation. You can check the Cheat sheet if you want to learn more about them. Before using these algorithms, you must understand that not all of them are suitable for all problems. Some of the methods, like Levenberg-Marquardt or Conjugate Gradient, work better for small networks, and they would be extremely slow for networks with millions of parameters. In addition, it's important to note that not all algorithms can be trained with mini-batches. Algorithms like Conjugate Gradient are not able to train properly with mini-batches.

## Error functions

NeuPy has many different error functions.

```python
from neupy import algorithms, layers

network = algorithms.GradientDescent(
    [
        layers.Input(784),
        layers.Relu(500),
        layers.Relu(300),
        layers.Softmax(10),
    ],
    error='categorical_crossentropy',
)
```

Also, it's possible to create custom error functions. An error function should have two mandatory arguments.
```python
import theano.tensor as T
from neupy import algorithms, layers

def mean_absolute_error(expected, predicted):
    return T.abs_(expected - predicted).mean()

network = algorithms.GradientDescent(
    [
        layers.Input(784),
        layers.Relu(500),
        layers.Relu(300),
        layers.Softmax(10),
    ],
    error=mean_absolute_error,
)
```

An error function should return a scalar, because during training the output from the error function will be used as the variable with respect to which we are differentiating.

Algorithms with constructible architectures allow additional update rules for parameter regularization and step updates. For instance, suppose we want to add Weight Decay regularization and we want to decrease the step monotonically after each epoch:

```python
from neupy import algorithms, layers

network = algorithms.GradientDescent(
    [
        layers.Input(784),
        layers.Relu(500),
        layers.Relu(300),
        layers.Softmax(10),
    ],
    step=0.1,
    batch_size=16,
    addons=[algorithms.WeightDecay, algorithms.StepDecay],
)
```

Both the WeightDecay and StepDecay algorithms have additional parameters. In case we need to modify them, we can add them to the training algorithm:

```python
from neupy import algorithms, layers

network = algorithms.GradientDescent(
    [
        layers.Input(784),
        layers.Relu(500),
        layers.Relu(300),
        layers.Softmax(10),
    ],
    step=0.1,
    batch_size=16,
    # Parameter from StepDecay
    reduction_freq=50,
    # Parameter from WeightDecay
    decay_rate=0.05,
    addons=[algorithms.WeightDecay, algorithms.StepDecay],
)
```

NeuPy doesn't allow using multiple regularization and step-update add-ons for a training algorithm:

```python
>>> from neupy import algorithms, layers
>>> ...
...     [
...         layers.Input(784),
...         layers.Relu(500),
...         layers.Relu(300),
...         layers.Softmax(10),
...     ],
```
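Conceptually, the two add-ons above just modify the plain gradient-descent update. A minimal plain-Python sketch of the idea (this is not NeuPy's implementation; the `decay_rate` and `reduction_freq` names merely mirror the parameters above, and the step schedule is one common monotone choice):

```python
# Sketch of what weight decay and step decay add to plain SGD:
# weight decay folds an L2 penalty into the gradient, and step decay
# monotonically shrinks the learning rate as epochs accumulate.

def sgd_step(weights, grads, step, decay_rate=0.05):
    """One SGD update with L2 weight decay folded into the gradient."""
    return [w - step * (g + decay_rate * w) for w, g in zip(weights, grads)]

def decayed_step(initial_step, epoch, reduction_freq=50):
    """One common monotone schedule: the step shrinks with the epoch count."""
    return initial_step / (1.0 + epoch / reduction_freq)

# Demo: minimize f(w) = w**2 from w = 10 with both rules active.
weights = [10.0]
for epoch in range(200):
    grads = [2 * w for w in weights]  # gradient of w**2
    weights = sgd_step(weights, grads, decayed_step(0.1, epoch))
```

The point of the sketch is only the shape of the update: regularization perturbs the gradient, while the step-update rule rescales the learning rate, so the two compose without interfering.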
# What's the logic behind partial fraction decomposition?

1. Aug 30, 2015

### A.MHF

Ok so I took partial fraction decomposition in Calc II, and now I'm taking it again in a Differential Equations course. The problem is that I don't really understand what I'm doing. I understand the procedure when having simple real roots; for example 2x+1/(x+1)(x+2) becomes A/(x+1) + B/(x+2), because multiplying the two would get us a common denominator of (x+1)(x+2), which is what we want. But I don't understand why, when having repeated roots, we have to include all the powers in the expansion. For example: 2x+1/(x+1)^3 = A/(x+1) + B/(x+1)^2 + C/(x+1)^3. Also, when having an irreducible factor, we have to put (Ax+B) in the numerator instead of just "A". Can someone help me understand what's going on here? Thanks.

2. Aug 31, 2015

### Orodruin, Staff Emeritus

It is a matter of having enough constants available to match the polynomial you obtain, when putting everything over a common denominator, to the original polynomial.

3. Aug 31, 2015

### HallsofIvy, Staff Emeritus

The logic in writing (2x+1)/((x+1)(x+2)) as A/(x+1) + B/(x+2) (note the parentheses: this is NOT "2x+ 1/(x+1)(x+2)", which cannot be written in that form) is the other way: A/(x+1) + B/(x+2) = [A(x+2)]/[(x+1)(x+2)] + [B(x+1)]/[(x+1)(x+2)] = (Ax + 2A + Bx + B)/[(x+1)(x+2)] = [(A+B)x + (2A+B)]/[(x+1)(x+2)], which will be equal to (2x+1)/((x+1)(x+2)) as long as A+B = 2 and 2A+B = 1. And, of course, A = -1, B = 3 satisfy those.
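The repeated-root case follows the same coefficient-matching logic: expanding A(x+1)^2 + B(x+1) + C and matching against 2x+1 gives A = 0, B = 2, C = -1 (from the x^2, x, and constant coefficients in turn). A quick exact check in Python with the standard-library fractions module; the specific function (2x+1)/(x+1)^3 comes from the question above:

```python
from fractions import Fraction

# Match A(x+1)^2 + B(x+1) + C against 2x + 1:
#   x^2 coefficient: A = 0
#   x^1 coefficient: 2A + B = 2  ->  B = 2
#   x^0 coefficient: A + B + C = 1  ->  C = -1
A, B, C = Fraction(0), Fraction(2), Fraction(-1)

def lhs(x):
    """Original rational function (2x+1)/(x+1)^3, evaluated exactly."""
    return Fraction(2 * x + 1, (x + 1) ** 3)

def rhs(x):
    """Partial-fraction form A/(x+1) + B/(x+1)^2 + C/(x+1)^3."""
    x = Fraction(x)
    return A / (x + 1) + B / (x + 1) ** 2 + C / (x + 1) ** 3

# Two degree-bounded rational functions that agree at enough points
# agree everywhere, so checking several x values suffices.
assert all(lhs(x) == rhs(x) for x in range(5))
```

This also shows why every power must be kept available: with only C/(x+1)^3 there would be a single constant to match two independent coefficients, and the system would generally have no solution. Here one of the constants (A) simply comes out zero.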
Analysis of the Polymer Properties and Sound Characteristics of Interlayer Films for Laminated Glass

Authors: Ko, Sangwon; Hong, Jiyoung; Sunwoo, Yerim; Kim, Young Jun

Abstract: To improve the sound insulation performance of laminated glass in high-speed trains, it is beneficial to study the relationship between the characteristics of interlayer films and the acoustical performance. In addition to those of conventional PVB (polyvinyl butyral), the dynamic mechanical properties of PVB derivatives and PC (polycarbonate), which are candidates for interlayer films, were analyzed. We assumed that PVB-HEMU, which has a glass transition temperature ($T_g$) around room temperature and a large $\tan\delta$ (loss tangent) value, can be made to damp efficiently. The damping capability was tested utilizing sound transmission loss measurement and simulation under the identical structure of laminated glass in high-speed trains. We also built a database for analysis of the relations between interlayer film characteristics and acoustical performance; this was followed by the determination of sound transmission loss using the intensity technique and FEA.

Keywords: laminated glass; interlayer film; sound transmission loss; finite element analysis

Language: Korean

Cited by: A Study on the Real-time Optimization Technique for a Train Velocity Profile, Journal of the Korea Academia-Industrial cooperation Society, 2016, 17(8), 344.
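For context on the sound transmission loss (STL) quantity that the abstract measures: away from the damping-controlled coincidence region, a single panel roughly follows the mass law. A small illustrative calculation follows; the field-incidence approximation TL ≈ 20·log10(m·f) − 47 dB is a textbook estimate, not a result from this paper, and the 10 mm glass pane is a hypothetical example.

```python
import math

def mass_law_tl(surface_density_kg_m2, freq_hz):
    """Field-incidence mass-law estimate of transmission loss in dB
    (textbook approximation: 20*log10(m*f) - 47)."""
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47

# Hypothetical 10 mm glass pane: density ~2500 kg/m^3 -> 25 kg/m^2
m = 2500 * 0.010
for f in (125, 500, 2000):
    print(f, "Hz:", round(mass_law_tl(m, f), 1), "dB")
```

The estimate rises by about 6 dB per doubling of frequency or of mass, which is why interlayer damping (the tan δ studied above) matters: it addresses the coincidence dip where real glazing falls well below this mass-law line.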
## Tbilisi Mathematical Journal

### On some new results for non-decreasing sequences

Hüseyin Bor

#### Abstract

In this paper, a general theorem on absolute Riesz summability factors of infinite series is proved under weaker conditions. We also obtain some new results and recover known ones.

#### Article information

Source: Tbilisi Math. J., Volume 10, Issue 2 (2017), 57-64.
Dates: Received: 31 October 2016; Accepted: 15 December 2016; First available in Project Euclid: 26 May 2018.
Permanent link to this document: https://projecteuclid.org/euclid.tbilisi/1527300043
Digital Object Identifier: doi:10.1515/tmj-2017-0025
Mathematical Reviews number (MathSciNet): MR3627158
Zentralblatt MATH identifier: 1376.40007

#### Citation

Bor, Hüseyin. On some new results for non-decreasing sequences. Tbilisi Math. J. 10 (2017), no. 2, 57-64. doi:10.1515/tmj-2017-0025. https://projecteuclid.org/euclid.tbilisi/1527300043
# pdfx causes soul to stop working

I noticed that adding the pdfx package to my document stops the soul package from highlighting text in yellow.

```latex
\documentclass[a4paper,12pt]{article}
\usepackage[a-1b]{pdfx} % comment this out
\usepackage{color}
\usepackage{soulutf8}
\begin{document}
\hl{TODO}
\end{document}
```

The example above does not report any errors, but results in non-highlighted text. Removing the pdfx package reverts everything to normal behavior. The order of the packages does not seem to have an immediate effect.

• I didn't encounter any problem when compiling with pdflatex (TeXLive 2018). To help you, we need to know the exact error message or the log file. – Marian G. Dec 13 '19 at 10:57
• What is your TeX engine? Compiling with XeLaTeX or LuaLaTeX creates a problem. It needs the -shell-escape option in the command. – Niranjan Dec 13 '19 at 11:00
• PDF Engine: pdflatex -synctex=1 -shell-escape -interaction=nonstopmode %.tex; TeXLive version: texlive-bin 2019.51075-4 – jureslak Dec 13 '19 at 11:03
• I uploaded the log file here: pastebin.com/2FYtFWtm, and there are no error messages. – jureslak Dec 13 '19 at 11:05

This is due to a long-standing incompatibility of soul with xcolor (which is loaded by pdfx). See https://tex.stackexchange.com/a/48502/2388

```latex
\documentclass[a4paper,12pt]{article}
\usepackage[a-1b]{pdfx} % comment this out
%\usepackage{color}
\usepackage{soulutf8}
\usepackage{etoolbox}

\makeatletter
% Patch \SOUL@ulunderline to use a private dimen register instead of the
% scratch register \dimen@, which conflicts with xcolor; \dimen@ occurs
% three times in the macro, hence three identical patches.
\patchcmd{\SOUL@ulunderline}{\dimen@}{\SOUL@dimen}{}{}
\patchcmd{\SOUL@ulunderline}{\dimen@}{\SOUL@dimen}{}{}
\patchcmd{\SOUL@ulunderline}{\dimen@}{\SOUL@dimen}{}{}
\newdimen\SOUL@dimen
\makeatother

\begin{document}
\hl{TODO}
\end{document}
```
Suppose I have a collection of $n$ vectors $C \subset \mathbb{F}_2^n$. They are of course spanned by the canonical set of $n$ basis vectors. What I would like to find is a much smaller (~ $\log n$) collection of basis vectors that span a collection of vectors which well approximates $C$. That is, I would like basis vectors $b_1,\ldots,b_k$ such that for every $v \in C$, there exists a $u \in \mathrm{span}(b_1,\ldots,b_k)$ such that $||u-v||_1 \leq \epsilon$. When is this possible? Is there a property that $C$ might possess to allow such a sparse approximation?
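For concreteness, here is a brute-force check of the approximation property. This is only a sketch: the names `hamming` and `approximates` are my own, and enumerating all $2^k$ elements of the span is feasible only for small $k$, which is exactly the regime $k \approx \log n$ asked about.

```python
from itertools import product

def hamming(u, v):
    # L1 (Hamming) distance between two binary tuples
    return sum(a != b for a, b in zip(u, v))

def approximates(C, basis, eps):
    """Check whether every v in C is within eps (Hamming distance) of
    some vector in the GF(2) span of `basis`, by enumerating the span."""
    n = len(basis[0]) if basis else len(C[0])
    span = set()
    for coeffs in product([0, 1], repeat=len(basis)):
        u = tuple(0 for _ in range(n))
        for c, b in zip(coeffs, basis):
            if c:  # addition over GF(2) is bitwise XOR
                u = tuple(x ^ y for x, y in zip(u, b))
        span.add(u)
    return all(min(hamming(u, v) for u in span) <= eps for v in C)

# Toy example: C clusters around two "centers", so 2 basis vectors suffice.
C = [(1, 1, 0, 0), (1, 1, 1, 0), (0, 0, 1, 1), (0, 0, 1, 0)]
basis = [(1, 1, 0, 0), (0, 0, 1, 1)]
print(approximates(C, basis, eps=1))  # True: every v is within distance 1 of the span
```

The toy example illustrates one sufficient property: if $C$ is clustered into few Hamming balls of radius $\epsilon$, the cluster centers (or a basis generating them) give such a sparse approximation.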
## My observations on the Pi Manifesto.

An enlightening discussion about pi and tau.

### My observations on the Pi Manifesto.

The author had a nice try with pi "showing up" in places, but just a few things I've seen so far (after looking for only a few minutes):

1. "Hey I have more functions than you!" isn't exactly a good argument. This author is doing himself exactly what the article is accusing "tauists" of doing --- conveniently asking questions that have pi as their answer.

2. I would argue that a lot of the functions in which pi is more convenient are much more complex and found in higher areas of mathematics, and those in which tau is found are much more elementary --- basically that pi is based off tau, rather than the other way around.

3. Some of those functions have the pi in there because it was a simplification from things cancelling out with tau ... like the formula for an ellipse. Sure, pi a b is more compact, but where does it come from? Integrating a polar form from 0 to 2pi ... or 0 to tau.

4. In the section on Euler's identity, if the author wants to say the result with pi is "stronger" he should say WHY it is stronger.

5. As for the 1/2 in the area formula for a circle ... it actually follows directly from Archimedes' original calculation of the area of a circle, which is that it has the same area as a triangle with its base equal to the circumference and its height equal to the radius --- in short, 1/2 C r.

6. The trigonometric functions having multiples of pi for their periods takes away again from the MEANING of those periods. When you say that sin(x) has a period of 2pi and tan(x) has a period of pi, big deal. You've said they're numbers. When you say the periods are tau and 1/2 tau, you realize something ... sin(x) takes 1 full circle (1 tau) to repeat itself, and tan(x) takes 1/2 a circle (1/2 tau) to do the same.

7. The argument for pi being convenient for area isn't bad, but I would argue measures of length (1-dimensional) are more fundamental than measures of area (2-dimensional). All other dimensions are extensions of 1 dimension, not 2. In grade school you learn that if you double the length in a figure, you quadruple the area, and multiply the volume by 8. Should we be teaching that if you double the area, you multiply the length by √2 and the volume by 2√2?

Here's the thing ... I don't advocate COMPLETELY getting rid of pi like the author insinuates, but I do believe that tau is the more fundamental constant. As far as I'm concerned, tau should be taught as the more fundamental constant when students are learning it, and pi would be learned for convenience in calculating various other formulas.

DMAshura Kindergarten Posts: 2 Joined: Wed Jul 13, 2011 11:24 am

### Re: My observations on the Pi Manifesto.

DMAshura wrote: 1. "Hey I have more functions than you!" isn't exactly a good argument. This author is doing himself exactly what the article is accusing "tauists" of doing --- conveniently asking questions that have pi as their answer.

Indeed. It is a very poor argument, and both manifestos would be far better without it. However, since the point is raised by the Tau Manifesto, it seems appropriate that a Pi Manifesto should respond. I think a better answer would be "The Tau Manifesto has cherry-picked equations to make tau seem like a more universal constant. We could just as easily cherry-pick equations (i.e. these...), but the argument basically devolves into ridiculousness. Call it a draw on this point."

DMAshura wrote: 2. I would argue that a lot of the functions in which pi is more convenient are much more complex and found in higher areas of mathematics, and those in which tau is found are much more elementary --- basically that pi is based off tau, rather than the other way around.

I agree entirely.
Which is why I generally raise the point that tau may be more useful pedagogically. Teach students to use the thing that is going to make their lives easiest in the context in which they are working, then let them figure out the generalizations later. Most students are introduced to pi/tau in the context of basic trigonometry or geometry, where the choice of tau is probably easier to understand. Hence I would prefer to teach tau, and have people learn pi as a historical artifact, or in the contexts where they might actually want to use it.

DMAshura wrote: 4. In the section on Euler's identity, if the author wants to say the result with pi is "stronger" he should say WHY it is stronger.

I'm not sure that "stronger" is the word that I would choose, but I definitely prefer the identity with pi over tau on purely aesthetic grounds. You get several basic constants, and only addition, multiplication, and exponentiation in an equality with a zero on the right (that being the sort that you see over and over again in basic algebra). As to it being a "stronger" result, maybe it is because the geometric interpretation is clearer (i.e. that there is a rotation). However, if one makes that argument, maybe $e^{i\pi/2} = i$ would be an even better result, as it takes us through a one-quarter turn. Maybe $e^{i(\tau/4)} = i$ would be the best result, as the quarter turn is clear. That being said, I repeat that I much prefer the original version with pi for purely aesthetic reasons.

DMAshura wrote: Here's the thing ... I don't advocate COMPLETELY getting rid of pi like the author insinuates, but I do believe that tau is the more fundamental constant. As far as I'm concerned, tau should be taught as the more fundamental constant when students are learning it, and pi would be learned for convenience in calculating various other formulas.

This. I think that people on both sides of the issue are blowing things way out of proportion. Pi is not wrong, it just isn't always the most convenient notation. That being said, I do agree that tau is better for elementary students, and that pi is the wrong thing to teach to those students. Given that Hartl and Palais seem to want to restructure the curriculum, perhaps the stridency of their position is understandable and required in order to make their point.

xander

xander University Posts: 154 Joined: Fri Feb 11, 2011 12:14 am Location: Sparks, NV, USA

### Re: My observations on the Pi Manifesto.

xander wrote: I agree entirely. Which is why I generally raise the point that tau may be more useful pedagogically. Teach students to use the thing that is going to make their lives easiest in the context in which they are working, then let them figure out the generalizations later.

For now, while π is ubiquitous, just teach the students that τ is more fundamental, that π is more or less a mistake, and that they'll encounter situations using π in which things don't make intuitive sense. Do some equations mixing and matching the two, then keep teaching them the π-based equations they'll actually use. "You'll often encounter equations involving 2π. Keep it in the back of your head why it's there." Make sure to show the cases of 2π/2 explicitly before cancelling them out.

engineer Kindergarten Posts: 2 Joined: Fri Sep 16, 2011 10:03 am

### Re: My observations on the Pi Manifesto.

DMAshura wrote: 1. "Hey I have more functions than you!" isn't exactly a good argument. This author is doing himself exactly what the article is accusing "tauists" of doing --- conveniently asking questions that have pi as their answer.

Definitely true, especially considering that the Tau Manifesto's list came in the introduction and was obviously just meant to show a few examples, not to be an exhaustive list. I have started a topic about this point here: viewtopic.php?f=30&t=403

DMAshura wrote: 2.
I would argue that a lot of the functions in which pi is more convenient are much more complex and found in higher areas of mathematics, and those in which tau is found are much more elementary --- basically that pi is based off tau, rather than the other way around.

I think the main reason most of the formulas are from higher areas of mathematics is that there are more formulas in those areas, making them easier to cherry-pick.

DMAshura wrote: 3. Some of those functions have the pi in there because it was a simplification from things cancelling out with tau ... like the formula for an ellipse. Sure, pi a b is more compact, but where does it come from? Integrating a polar form from 0 to 2pi ... or 0 to tau.

Yeah, this is basically the basis of why tau is more natural, even when pi is simpler. Even in formulas like the area of a circle, pi is only there because of cancellation with a factor of 1/2 that comes via triangles or the power rule. The same thing goes for the area of an ellipse, since it is just a circle of radius a stretched in one direction by a factor of b/a, meaning the area will be multiplied by b/a. In addition, there is actually another version of the formula that is simpler with tau: the area of an ellipse defined by the equation $ax^2 + bxy + cy^2 = 1$ is $\tau/\sqrt{4ac - b^2}$.

DMAshura wrote: 4. In the section on Euler's identity, if the author wants to say the result with pi is "stronger" he should say WHY it is stronger.

Exactly what I was thinking. I assume he meant because the tau version comes "trivially" by squaring the pi version; the problem with this is that it's a lie--I could just as easily say the pi version comes "trivially" by squaring the eta version, and in general the n version comes "trivially" by squaring the n/2 version. Both versions really come from Euler's formula, and neither is more trivial than the other.

DMAshura wrote: 5. As for the 1/2 in the area formula for a circle ...
it actually follows directly from Archimedes' original calculation of the area of a circle, which is that it has the same area as a triangle with its base equal to the circumference and its height equal to the radius --- in short, 1/2 C r.

6. The trigonometric functions having multiples of pi for their periods takes away again from the MEANING of those periods. When you say that sin(x) has a period of 2pi and tan(x) has a period of pi, big deal. You've said they're numbers. When you say the periods are tau and 1/2 tau, you realize something ... sin(x) takes 1 full circle (1 tau) to repeat itself, and tan(x) takes 1/2 a circle (1/2 tau) to do the same.

Yet another reason why tau is just so much more elegant. It's actually pretty funny to look at the Wikipedia article for pi where it talks about how pi is important because the periods of certain functions are multiples of pi, such as 2π. How about: tau is important because the period of many functions is equal to tau? In addition, the period of radians themselves is tau. Also, all of the "tau is just a multiple of pi" arguments don't work anyway, because we can just as easily say "pi is just a fraction of tau."

DMAshura wrote: 7. The argument for pi being convenient for area isn't bad, but I would argue measures of length (1-dimensional) are more fundamental than measures of area (2-dimensional). All other dimensions are extensions of 1 dimension, not 2. In grade school you learn that if you double the length in a figure, you quadruple the area, and multiply the volume by 8. Should we be teaching that if you double the area, you multiply the length by √2 and the volume by 2√2?

Agreed. Pi-ists will often try to think of crazy convoluted reasons to say that area is more fundamental, but really it all boils down to this. There is no way to seriously say that two dimensions are more fundamental than one.

DMAshura wrote: Here's the thing ...
I don't advocate COMPLETELY getting rid of pi like the author insinuates, but I do believe that tau is the more fundamental constant. As far as I'm concerned, tau should be taught as the more fundamental constant when students are learning it, and pi would be learned for convenience in calculating various other formulas.

I would agree with this as well. I think as long as we already have pi, we might as well keep it for convenience once we switch over, and just define it as τ/2. That way we still know tau is more fundamental, but in the rare case where pi is simpler we have a convenient shorthand--like using the semiperimeter instead of 1/2 the perimeter.

τ>π

1=0 Mathlete Posts: 50 Joined: Sun Mar 19, 2017 11:01 am

### Re: My observations on the Pi Manifesto.

xander wrote: As to it being a "stronger" result, maybe it is because the geometric interpretation is clearer (i.e. that there is a rotation). However, if one makes that argument, maybe $e^{i\pi/2} = i$ would be an even better result, as it takes us through a one-quarter turn. Maybe $e^{i(\tau/4)} = i$ would be the best result, as the quarter turn is clear.

How is the turn clearer in the pi version? One of the reasons I prefer the tau version is that $e^{i\tau} = 1$ represents a full turn very clearly by getting you back where you started (at 1, or $e^0$), while the pi version is only a half turn. The pi version, $e^{i\pi} = -1$, is still pretty clearly a half-turn, but the commonly used $e^{i\pi} + 1 = 0$ version completely destroys the geometric interpretation. It's not even just a turn anymore; it's a turn followed by a random shift to the right. In order for the rotation to be obvious from the equation, the result needs to be on the unit circle, but this version moves the result off the unit circle, to an inelegant zero.

τ>π

1=0 Mathlete Posts: 50 Joined: Sun Mar 19, 2017 11:01 am
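The polar integration invoked in the thread (points 3 and 5 of the original post) can be written out explicitly; this is the standard calculus derivation, using τ = 2π:

```latex
A \;=\; \int_{0}^{\tau}\!\!\int_{0}^{r} \rho \,\mathrm{d}\rho \,\mathrm{d}\theta
  \;=\; \int_{0}^{\tau} \tfrac{1}{2} r^{2} \,\mathrm{d}\theta
  \;=\; \tfrac{1}{2}\,\tau r^{2}
  \;=\; \pi r^{2}.
```

Written with τ, the 1/2 produced by the power rule stays visible; written with π, it cancels into the familiar πr².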
## Formulae Used in Trigonometric Proofs

Any number of trigonometric identities can be proved. Really the only way to study them is to work through lots of examples, and it helps first to have a summary of the equations. There are five very important equations which can be used to solve many trigonometric problems, and there are also some very useful formulae relating the various trigonometric functions.
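The equations themselves appear to have been images in the original page and did not survive extraction. As a placeholder (it is my assumption that these five standard identities are the ones meant), the usual candidates are the three Pythagorean identities,

```latex
\sin^{2}\theta + \cos^{2}\theta = 1, \qquad
1 + \tan^{2}\theta = \sec^{2}\theta, \qquad
1 + \cot^{2}\theta = \csc^{2}\theta,
```

together with the quotient identities relating the functions to one another,

```latex
\tan\theta = \frac{\sin\theta}{\cos\theta}, \qquad
\cot\theta = \frac{\cos\theta}{\sin\theta} = \frac{1}{\tan\theta}.
```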