text
stringlengths
100
957k
meta
stringclasses
1 value
# Simple algebra • May 20th 2010, 12:10 PM Mukilab Simple algebra $x^2-8x+23=(x-p)^2+q$ for all values of x Find p and q, does this mean I can just use x as anything such as 0? if so using simultaneous equations I got $-8=p^2-p^2$ What now? • May 20th 2010, 12:28 PM e^(i*pi) Quote: Originally Posted by Mukilab $x^2-8x+23=(x-p)^2+q$ for all values of x Find p and q, does this mean I can just use x as anything such as 0? if so using simultaneous equations I got $-8=p^2-p^2$ What now? No, you should complete the square on the left hand side to get it into the form on the right, then find p and q by comparing coefficients To get you started $x^2+bx = \left(x+\frac{b}{2}\right)^2 - \frac{b^2}{4} $ • May 20th 2010, 12:50 PM Mukilab Quote: Originally Posted by e^(i*pi) No, you should complete the square on the left hand side to get it into the form on the right, then find p and q by comparing coefficients To get you started $x^2+bx = \left(x+\frac{b}{2}\right)^2 - \frac{b^2}{4} $ Do not know what you mean by " complete the square on the left " or "comparing coefficients" • May 20th 2010, 01:51 PM e^(i*pi) Quote: Originally Posted by Mukilab Do not know what you mean by " complete the square on the left " or "comparing coefficients" Completing the square is covered in post 3 of this topic: http://www.mathhelpforum.com/math-he...tic-curve.html Comparing coefficients says that two equations are equal if and only if the coefficients on each side match. For example given that ax^2+bx+c = x^2 - 12x+1 comparing coefficients tells you that a = 1, b=-12 and c = 1 • May 21st 2010, 08:30 AM Mukilab
{}
• ### On the Convergence of the TTL Approximation for an LRU Cache under Independent Stationary Request Processes(1707.06204) July 9, 2018 cs.NI, cs.PF The modeling and analysis of an LRU cache is extremely challenging as exact results for the main performance metrics (e.g. hit rate) are either lacking or cannot be used because of their high computational complexity for large caches. As a result, various approximations have been proposed. The state-of-the-art method is the so-called TTL approximation, first proposed and shown to be asymptotically exact for IRM requests by Fagin. It has been applied to various other workload models and numerically demonstrated to be accurate but without theoretical justification. In this paper we provide theoretical justification for the approximation in the case where distinct contents are described by independent stationary and ergodic processes. We show that this approximation is exact as the cache size and the number of contents go to infinity. This extends earlier results for the independent reference model. Moreover, we establish results not only for the aggregate cache hit probability but also for every individual content. Last, we obtain bounds on the rate of convergence. • ### Sharing LRU Cache Resources among Content Providers: A Utility-Based Approach(1702.01823) Feb. 6, 2017 cs.NI In this paper, we consider the problem of allocating cache resources among multiple content providers. The cache can be partitioned into slices and each partition can be dedicated to a particular content provider, or shared among a number of them. It is assumed that each partition employs the LRU policy for managing content. We propose utility-driven partitioning, where we associate with each content provider a utility that is a function of the hit rate observed by the content provider. We consider two scenarios: i)~content providers serve disjoint sets of files, ii)~there is some overlap in the content served by multiple content providers. In the first case, we prove that cache partitioning outperforms cache sharing as cache size and numbers of contents served by providers go to infinity. In the second case, It can be beneficial to have separate partitions for overlapped content. In the case of two providers, it is usually always beneficial to allocate a cache partition to serve all overlapped content and separate partitions to serve the non-overlapped contents of both providers. We establish conditions when this is true asymptotically but also present an example where it is not true asymptotically. We develop online algorithms that dynamically adjust partition sizes in order to maximize the overall utility and prove that they converge to optimal solutions, and through numerical evaluations, we show they are effective. • ### Spanning connectivity in a multilayer network and its relationship to site-bond percolation(1402.7057) May 27, 2016 cond-mat.stat-mech We analyze the connectivity of an $M$-layer network over a common set of nodes that are active only in a fraction of the layers. Each layer is assumed to be a subgraph (of an underlying connectivity graph $G$) induced by each node being active in any given layer with probability $q$. The $M$-layer network is formed by aggregating the edges over all $M$ layers. We show that when $q$ exceeds a threshold $q_c(M)$, a giant connected component appears in the $M$-layer network---thereby enabling far-away users to connect using `bridge' nodes that are active in multiple network layers---even though the individual layers may only have small disconnected islands of connectivity. We show that $q_c(M) \lesssim \sqrt{-\ln(1-p_c)}\,/{\sqrt{M}}$, where $p_c$ is the bond percolation threshold of $G$, and $q_c(1) \equiv q_c$ is its site percolation threshold. We find $q_c(M)$ exactly for when $G$ is a large random network with an arbitrary node-degree distribution. We find $q_c(M)$ numerically for various regular lattices, and find an exact lower bound for the kagome lattice. Finally, we find an intriguingly close connection between this multilayer percolation model and the well-studied problem of site-bond percolation, in the sense that both models provide a smooth transition between the traditional site and bond percolation models. Using this connection, we translate known analytical approximations of the site-bond critical region, which are functions only of $p_c$ and $q_c$ of the respective lattice, to excellent general approximations of the multilayer connectivity threshold $q_c(M)$. • ### Computing Traversal Times on Dynamic Markovian Paths(1303.3660) March 15, 2013 cs.NI, cs.DS In source routing, a complete path is chosen for a packet to travel from source to destination. While computing the time to traverse such a path may be straightforward in a fixed, static graph, doing so becomes much more challenging in dynamic graphs, in which the state of an edge in one time slot (i.e., its presence or absence) is random, and may depend on its state in the previous time step. The traversal time is due to both time spent waiting for edges to appear and time spent crossing them once they become available. We compute the expected traversal time (ETT) for a dynamic path in a number of special cases of stochastic edge dynamics models, and for three edge failure models, culminating in a surprisingly challenging yet realistic setting in which the initial configuration of edge states for the entire path is known. We show that the ETT for this "initial configuration" setting can be computed in quadratic time, by an algorithm based on probability generating functions. We also give several linear-time upper and lower bounds on the ETT. • ### Characterizing Continuous Time Random Walks on Time Varying Graphs(1112.5762) Dec. 2, 2012 physics.soc-ph, cs.SI In this paper we study the behavior of a continuous time random walk (CTRW) on a stationary and ergodic time varying dynamic graph. We establish conditions under which the CTRW is a stationary and ergodic process. In general, the stationary distribution of the walker depends on the walker rate and is difficult to characterize. However, we characterize the stationary distribution in the following cases: i) the walker rate is significantly larger or smaller than the rate in which the graph changes (time-scale separation), ii) the walker rate is proportional to the degree of the node that it resides on (coupled dynamics), and iii) the degrees of node belonging to the same connected component are identical (structural constraints). We provide examples that illustrate our theoretical findings. • ### Optimal Threshold Control by the Robots of Web Search Engines with Obsolescence of Documents(1201.4150) Jan. 19, 2012 cs.NI A typical web search engine consists of three principal parts: crawling engine, indexing engine, and searching engine. The present work aims to optimize the performance of the crawling engine. The crawling engine finds new web pages and updates web pages existing in the database of the web search engine. The crawling engine has several robots collecting information from the Internet. We first calculate various performance measures of the system (e.g., probability of arbitrary page loss due to the buffer overflow, probability of starvation of the system, the average time waiting in the buffer). Intuitively, we would like to avoid system starvation and at the same time to minimize the information loss. We formulate the problem as a multi-criteria optimization problem and attributing a weight to each criterion. We solve it in the class of threshold policies. We consider a very general web page arrival process modeled by Batch Marked Markov Arrival Process and a very general service time modeled by Phase-type distribution. The model has been applied to the performance evaluation and optimization of the crawler designed by INRIA Maestro team in the framework of the RIAM INRIA-Canon research project. • ### Dynamic Coverage of Mobile Sensor Networks(1101.0376) Jan. 1, 2011 cs.NI In this paper we study the dynamic aspects of the coverage of a mobile sensor network resulting from continuous movement of sensors. As sensors move around, initially uncovered locations are likely to be covered at a later time. A larger area is covered as time continues, and intruders that might never be detected in a stationary sensor network can now be detected by moving sensors. However, this improvement in coverage is achieved at the cost that a location is covered only part of the time, alternating between covered and not covered. We characterize area coverage at specific time instants and during time intervals, as well as the time durations that a location is covered and uncovered. We further characterize the time it takes to detect a randomly located intruder. For mobile intruders, we take a game theoretic approach and derive optimal mobility strategies for both sensors and intruders. Our results show that sensor mobility brings about unique dynamic coverage properties not present in a stationary sensor network, and that mobility can be exploited to compensate for the lack of sensors to improve coverage.
{}
# Find the Median 30 , 35 , 33 , 31 , 39 , 29 , 32 30 , 35 , 33 , 31 , 39 , 29 , 32 Arrange the terms in ascending order. 29,30,31,32,33,35,39 The median is the middle term in the arranged data set. 32 Find the Median 30 , 35 , 33 , 31 , 39 , 29 , 32 ### Solving MATH problems We can solve all math problems. Get help on the web or with our math app Scroll to top
{}
Hide # University Zoning In an effort to contain a recent pandemic, a certain university has decided to implement a zoning system. The school campus is a rectangular plot of land and is divided into a grid with $R$ rows and $C$ columns. Rows and columns are numbered from $1$ to $R$ and $C$ respectively. The top-most row is the first row, and the left-most column is the first column. Each of the $F$ faculties has been allocated some grid cells. For convenience, each of the $F$ faculties have unique code numbers from $1$ to $F$ inclusive. Cells belong to at most $1$ faculty. $S$ students, each with unique student number $D$, have enrolled in the university. Within a faculty, the student with the smallest student number will occupy the top-most unoccupied cell. If there are ties, the left-most available cell will be assigned to that student. This repeats for the student with the next smallest student number, until every student within that faculty has been assigned to a cell. Every faculty has enough spots for its enrolled students, and has at least one enrolled student. On the first day of school, students were found to be in random cells all over the campus. Some cells might even contain more than one student! You are the safety officer, and have been given the following task: There must be at least $G$ faculties with their compliance target met. A compliance target is met if there is at least $T$ students found in their assigned cells. Every faculty has an assigned $T$ value. To complete the task, you will instruct students to move to their assigned cells. Students can take $1$ step to move to any of the four adjacent cells (up, down, left, right), can occupy cells belonging to other faculties, can occupy the same cell as other students and can walk past other students if necessary. If a student is in their assigned cell, that student is counted as being in their assigned cell, even if there are other students currently occupying it. What is the minimum number of steps the students have to take in order for you to complete your task? Refer to the diagrams below for clarification on how the assignment works for the sample inputs. ## Input The first line contains five integers $1 \le R \le 10^9$, $1 \le C \le 10^9$, $1 \le F \le 10^2$, $1 \le S \le 10^5$ and $0 \le G \le F$. The next $F$ lines describes the cells assigned to each faculty, from faculty $1$ to faculty $F$ in ascending order. Each of the $F$ lines contains an integer $1 \le K \le 10^3$ the number of cells assigned to that faculty. The rest of the line contains $K$ integer pairs $r_1\; c_1\; r_2\; c_2\; \ldots \; r_ k\; c_ k$, each $r\; c$ pair describing the row and column coordinates of a cell allocated to that faculty. The next $S$ lines contain four integers: $r\; c$, the coordinates of each of the students on the first day, student number $1 \le D \le 10^9$ and $f$, the faculty that student is enrolled in. All cell coordinates lie inside the school boundaries. The final line contains the $T$ values ($0 \leq T \leq E$ where $E$ is the total number of students enrolled in that faculty) for each faculty, from faculty $1$ to faculty $F$. ## Output Output the minimum number of steps required. 1. ($2$ Points): Sample. 2. ($12$ Points): $T = E$ for all faculties, $G = F$, $R = 1$. 3. ($20$ Points): $T = E$ for all faculties, $G = F$. 4. ($27$ Points): $T = E$ for all faculties. 5. ($39$ Points): No additional constraint. Sample Input 1 Sample Output 1 3 5 2 5 2 4 1 1 1 2 1 5 2 1 3 2 5 3 3 3 5 1 1 1 1 1 3 2 1 1 5 3 1 2 5 4 2 3 3 5 2 3 2 1 Sample Input 2 Sample Output 2 3 5 1 2 1 2 1 1 1 2 3 5 1 1 1 3 2 1 1 1 CPU Time limit 1 second Memory limit 1024 MB Difficulty 2.4easy Statistics Show
{}
## Woodin conference March 11, 2015 The conference in honor of Hugh Woodin’s 60th birthday will take place at Harvard University, on March 27-29, 2015. The meeting is partially supported by the Mid-Atlantic Mathematical Logic Seminar and the National Science Foundation. Funding is available to support participant travel. Please write to woodinbirthdayconference@gmail.com to apply for support, and to notify the organizers if you are planning to attend. The list of speakers is as follows: • H. Garth Dales • Qi Feng • Matthew D. Foreman • Ronald Jensen • Alexander S. Kechris • Menachem Magidor • Donald A. Martin • Grigor Sargsyan • Theodore A. Slaman • John R. Steel. We expect to publish proceedings of the conference, together with select additional research and survey papers, through the series Contemporary Mathematics, of the AMS. The editors of the proceedings are myself, James Cummings, Peter Koellner, and Paul Larson. Please contact me for information regarding the proceedings. Additional information can be found at the conference website. ## Cryptic marks February 5, 2015 New scientist recently ran a series on articles on “How to think about…” One of them, by Richard Webb and published December 13, 2014, was about infinity. It contains this quote: Woodin’s notepads consist mainly of cryptic marks he uses to focus his attention, to the occasional consternation of fellow plane passengers. “If they don’t try to change seats they ask me if I’m an artist,” he says. David Roberts wondered on Google+ what these cryptic marks look like. This reminded me of some pictures I took of them at the Conference on inner model theory at UC Berkeley last year. ## 580 -Partition calculus (5) April 21, 2009 1. Larger cardinalities We have seen that ${\omega\rightarrow(\omega)^n_m}$ (Ramsey) and ${\omega\rightarrow[\omega]^n_\omega}$ (${\mbox{Erd\H os}}$-Rado) for any ${n,m<\omega.}$ On the other hand, we also have that ${2^\kappa\not\rightarrow(3)^2_\kappa}$ (${\mbox{Sierpi\'nski}}$) and ${2^\kappa\not\rightarrow(\kappa^+)^2}$ (${\mbox{Erd\H os}}$-Kakutani) for any infinite ${\kappa.}$ Positive results can be obtained for larger cardinals than ${\omega}$ if we relax the requirements in some of the colors. A different extension, the ${\mbox{Erd\H os}}$-Rado theorem, will be discussed later. Theorem 1 (${\mbox{Erd\H os}}$-Dushnik-Miller) For all infinite cardinals ${\lambda,}$ ${\lambda\rightarrow(\lambda,\omega)^2.}$ This was originally shown by Dushnik and Miller in 1941 for ${\lambda}$ regular, with ${\mbox{Erd\H os}}$ providing the singular case. For ${\lambda}$ regular one can in fact show something stronger: Theorem 2 (${\mbox{Erd\H os}}$-Rado) Suppose ${\kappa}$ is regular and uncountable. Then $\displaystyle \kappa\rightarrow_{top}(\mbox{Stationary},\omega+1)^2,$ which means: If ${f:[\kappa]^2\rightarrow2}$ then either there is a stationary ${H\subseteq\kappa}$ that is ${0}$-homogeneous for ${f}$, or else there is a closed subset of ${\kappa}$ of order type ${\omega+1}$ that is ${1}$-homogeneous for ${f}$. (Above, top stands for “topological.”)
{}
# coherent analytic cohomology vanishes for q > 2dim Given a complex-analytic manifold of dimension $d$, why does the cohomology of coherent sheaves vanish in dimension $> 2d$ (without using GAGA)? • Shouldn't the cohomology of coherent sheaves vanish above $d$? This should be somewhere in "Coherent Analytic Sheaves", by Grauert and Remmert, but I don't have the book here so I can't check. Jun 30 '10 at 14:03 • Angelo, that is a very good point. I'm certain it isn't in the CAS book (I read it quite thoroughly, and there's no discussion of higher sheaf cohomology in that book apart from the chapter on Grauert's Higher Direct Image Theorem), so a better place to try is G&R's other book Theory of Stein Spaces. It would be nice to also have the result without smoothness. Jun 30 '10 at 15:03 • BCnrd the required vanishing follows from Andreotti-Grauert.See corollary4.15 page 428 of Demailly's book on CAG. Jun 30 '10 at 19:45 Note that it is not necessary to say to avoid GAGA, as GAGA has no relevance in the absence of compactness assumptions. Anyway, something much more general (and satisfying) is true: all topological sheaf cohomology on a (paracompact Hausdorff) analytic space vanishes beyond twice the analytic dimension. Here is a sketch of a proof. By metrization theorems, such spaces are metrizable. Also, connected components of analytic spaces are open, so cohomology is direct product of cohomologies on the connected components. We can therefore restrict attention to the connected case, so the underlying topological space is separable (i.e., countable base of opens); I think this latter fact is stated with reference in the Introduction of the book Theory of Stein Spaces. (If not, assume the given analytic space is separable, a very "practical" assumption!) By the local analytic "Noether normalization" (really Weierstrass Preparation), twice the analytic dimension equals the "topological dimension" in the sense of dimension theory as in Engelking's marvelous book "General topology" for separable metric spaces. (For separable metric spaces, various notions of topological dimension are proved to agree; all done in that book. For opens in a real Euclidean space it recovers the expected "dimension"!) That book shows open covers of separable metric spaces have refinements whose $(n+1)$-fold overlaps are empty for $n$ beyond the topological dimension (in one of the various equivalent senses of dimension: the "covering" dimension!). Now use equality of Cech and derived functor cohomology for paracompact Hausdorff spaces to conclude.
{}
# Properties of $l_q$-balls For a given $$q\in (0,1]$$, define the $$l_q$$-ball as $$\mathbb{B}_q(R_q)\mathrel{:=}\left\{\theta\in\mathbb{R}^d\,\middle\vert\,\sum_{j=1}^d \lvert\theta_j\rvert^q\leq R_q \right\}.$$ For a given integer $$s\in\{1,2,\dotsc,d\}$$, the best $$s$$-term approximation to a vector $$\theta^*\in\mathbb{R}^d$$ is defined as $$\Pi_s(\theta^*)\mathrel{:=}\arg\min_{\|\theta\|_0\leq s} \|\theta-\theta^*\|_2^2.$$ Show that the best $$s$$-term approximation satisfies $$\|\Pi_s(\theta^*)-\theta^*\|_2^2\leq(R_q)^{2/q}s^{1-2/q}.$$ I can see that $$\Pi_s(\theta^*)$$ has a closed-form, which takes the largest absolute value from $$\theta^*$$ and sets the remaining positions as $$0$$. I guess it is useful to consider the fact that for $$0, $$\|x\|_p\geq \|x\|_q$$. But I can only get $$(R_q/s)^{2/q}(d-s)^{2/q}$$, not $$s(R_q/s)^{2/q}$$ as in the conclusion. • Since your question is phrased as "show that ..." could you give us a little bit more information about where the question arose? In other words, is the desired conclusion something claimed in a paper you are reading, or a text you are following? Jan 19 at 22:17 • @YemonChoi It is from the book High-Dimensional Statistics: A Non-Asymptotic Viewpoint by Martin J. Wainwright. It is actually the last question in Exercise 7.2 (Page 230). I hope to solve this problem because in Page 196 it says Exercise 7.2 can help understand the interpretations of the membership in the $l_q$ ball. Jan 20 at 0:14 • Hepdrey, you may find more luck getting a response on maths.stackexchange.com. Questions on exercises from textbooks are probably best there. I just asked such a question myself and got a response quickly! :-) Jan 20 at 10:10 WLOG, let $$\theta^*=(\theta^*_1,...,\theta^*_d)$$ with $$|\theta^*_1|\geq |\theta^*_2| \geq\cdots\geq |\theta^*_d|$$. Then we have $$\|\Pi_s(\theta^*)-\theta^*\|_2^2 = \sum_{j=s+1}^d |\theta^*_j|^2 \leq |\theta^*_s|^{2-q} \sum_{j=s+1}^d |\theta^*_j|^q = \left(\frac{1}{s} \sum_{i=1}^s |\theta^*_s|^q \right)^{\frac{2-q}{q}} \sum_{j=s+1}^d |\theta^*_j|^q \leq \left(\frac{1}{s} \sum_{i=1}^s |\theta^*_i|^q \right)^{\frac{2-q}{q}} \sum_{j=s+1}^d |\theta^*_j|^q = \left(\frac{1}{s} \right)^{\frac{2-q}{q}} \left(\sum_{i=1}^s |\theta^*_i|^q \right)^{\frac{2-q}{q}} \sum_{j=s+1}^d |\theta^*_j|^q \\ \leq \left(\frac{1}{s} \right)^{\frac{2-q}{q}} \left(\sum_{i=1}^d |\theta^*_i|^q \right)^{\frac{2-q}{q}} \sum_{j=1}^d |\theta^*_j|^q = \left(\frac{1}{s} \right)^{\frac{2-q}{q}} \left(\sum_{i=1}^d |\theta^*_i|^q \right)^{\frac{2}{q}} \leq (R_q)^{2/q}s^{1-2/q}.$$
{}
The ndgrid function is similar to meshgrid , but works for N-dimensional matrices. All functions can be set different boundaries for x, y, and z, to maximize your viewing enjoyment. You can extract the results of the named formulas to a worksheet by array entering the name (i. To use the Fourier functions, you must first enable the Analysis ToolPack. If you already know how to create a basic X-Y plot on Excel, then skip ahead to page 3 and the section called “Changing the Plot Appearance”. The equation displayed for the best-fit line shows m (slope) to be 2 and C (y-intercept) to be 50. The data should be in two adjacent columns with the x data in the left column. Both impedance terms are functions of frequency and mode. Note on the MODE. Plotting wave functions¶ Creating a wave function file ¶ The following script will do a calculation for a CO molecule and save the wave functions in a file ( CO. And what here is just a graph of the 1 s wave function going across some radius defined this way, and you can see that the probability--well, this is the wave function, so we would have to square it and think about the probability. Play around with these three controls so you get a sense of how. Well, for almost for years developing in C#, last month was the first time I ended up drawing graphs in an application I'm developing. Find the value of e raised to the power of a number. As far as making a stage plot goes, it doesn't need to be incredibly artistic. How to turn off function argument ToolTips in Microsoft Office. I need to incorporate this somehow into the loop I have used to generate the wave in my code, in order to plot, lets say, the first 5 harmonics as subplots. The SIN function syntax has the following arguments: Number Required. You can visualize the 'plot' of a wavefunction, i. %matplotlib inline. read_csv("sample-salesv2. Other common levels for square waves include and (digital signals). So if you want twenty one data points X = n times Range / 20. I tried to search for 'scipy triangle wave', and I found a post on stack exchange that reveals how to do a triangle wave with the sawtooth function. fem1d_function_10_display, a MATLAB code which reads three files defining a 1D piecewise linear finite element method (FEM) function and displays a plot. From the time graph, the period and the frequency can be obtained. You still haven't really given us sufficient detail. Piecewise Functions • We’ll show one way to define and plot them in Matlab without using loops. Create cosine graph/plot cosine wave in excel With Using Microsoft Excel, we can make a variety of curves from mathemathic functions such as trigonometric functions : sine curve, cosine, tangent, hyperbolic sine (sinh), cosec (cosecant), sec, etc. Square waves are equivalent to a sine wave at the same (fundamental) frequency added to an infinite series of odd-multiple sine-wave harmonics at decreasing amplitudes. To plot the Nyquist diagram from the open-loop transfer function of a system we need to determine the magnitude and the phase as functions of frequency. The example below will show you how to show multiple graphs in the same plot using plot command in MATLAB. When you write the program on the MATLAB editor or command window, you need to follow the three steps for the graph. That's a standing wave ratio of 1. Initially, the x and y axis points have been defined as empty lists and line-width (lw) has been set as 2. Charts are a great way to sort out data that you have stored in an Excel 2013 workbook. If Y is complex, then the plot function plots. I can obviously make the vertical box plot using Excel's automatic chart function, but it would fit in my document much better as a horizontal plot. As you can tell, it becomes hard to work with them downstream in your code. The plot at the right is an XY scatterplot of Normal Score against the data. Step 1: Create your data in excel like the one in figure 1 below. (a) At which values of x is the particle most likely to be found? Select all that apply. Smack dab in the middle of that measurement is a horizontal line called the sinusoidal axis. Or feeding a smaller square wave into the circuit to decrease the amplitude. From the distance graph the wavelength may be determined. You can see that the output from MATLAB is one period of the DTFT, but it's not the period normally plotted, which is from to. In this case, you may want to extract the data from this chart. You can copy the example data in the following table and paste it in cell A1 of a new Excel worksheet to see the SWITCH function in action. 1 (the default) gives you a right-sided sawtooth, 0 gives a left-sided one, and 0. The probability density function (PDF) of a random variable, X, allows you to calculate the probability of an event, as follows: For continuous distributions, the probability that X has values in an interval (a, b) is precisely the area under its PDF in the interval (a, b). In this section, we’ll add a second plot to the chart in Worksheet 02b. The worksheet range A1:A11 shows numbers of ads. † Assume all systems are isolated. y=sin(t) plot(t,y,type="l", xlab="time", ylab="Sine wave"). Michael Fowler, UVa. In other words we can think of the ECG as a graph, plotting electrical activity on the vertical axis against time on the horizontal axis. I want to plot a sin wave with x axis as voltage and y axis as phase and also a sine wave with x axis as time and y axis as voltage. Click on the "remove term" button to see less terms. It is named after the function sine, of which it is the graph. Beyond simple math and grouping (like " (x+2) (x-4)"), there are some functions you can use as well. Anyone who wants to draw graphs of functions will find this program useful. A scatter plot makes the most sense since I want to look at the relationship to strikeouts and walks. an answer to Mathematica Stack Exchange!. Examples of wave energy are light waves of a distant galaxy, radio waves received by a cell phone and the sound waves of an orchestra. An odd function either passes through the origin (0, 0) or is reflected through the origin. On the Data tab, in the Analysis group, click Data Analysis. Before diving into the parametric equations plot, we are going to define a custom Scilab function, named fPlot (). Plot a wind rose in Excel. Note on the MODE. Evening catches me north of nothing. A frequency distribution shows just how values in a data set are distributed across categories. Well, if you look back at the time-domain plot, the array that was passed to the fft() command has two full sine-wave cycles. Permanent link to this graph page. plot (abs (fftshift (X))) That leaves us with the question of labeling the frequency axis. The actual equation involves some unknown constants before A and that have to be determined through an extrapolation of this function to f(0). I have got data points of a sound sample from Audacity which I exported to a. In one dimension, wave functions are often denoted by the symbol ψ(x,t). Thanks for contributing an answer to Mathematica Stack Exchange! Please be sure to answer the question. Smack dab in the middle of that measurement is a horizontal line called the sinusoidal axis. In this tutorial, we will explore how the shape of the wave function is related to the physi-cal settings of a quantum system and, conversely, how we can determine information about the shape of the wave function from the physical situation. Enter your data values so that the raw data (measurement, test value, etc. In the compass function each arrow’s length corresponds to the magnitude of a data element and its pointing direction indicates the angle of the complex data. Planck's Law (Updated: 3/13/2018). The common spreadsheet of Microsoft Excel TM is an. This creates a matrix of axes and shows the relationship for each pair of columns in a DataFrame. This Excel tutorial explains how to use the Excel COMPLEX function with syntax and examples. Let's show you how to use FREQUENCY function to make frequency distribution in Excel. To display the equation and R-squared value on the graph, click on the Options tab. For formulas to show results, select them, press F2, and then press Enter. Biomedical Arabia 25,583 views. It also serves as an example of how Excel can make something as complex as 3D graphics as simple as a few worksheet cells connected by multiplication and addition. plot response for a High pass fi. The steps to draw a sine and cosine graphs in excel are: 1. To plot the point (x, y, z) in three-dimensions, we simply add the step of moving parallel to the z-axis by z units. The SIN function expects radians. SIN Excel function is an inbuilt trigonometric function in excel which is used to calculate the sine value of given number or in terms of trigonometry the sine value of a given angle, here the angle is a number in excel and this function takes only a single argument which is the input number provided. Let's say for example that we want to graph the entire thing (from 0 to 360. Sine wave Vpp and DC offset When the function generator is turned on, it outputs a sine wave at 1 kHz with amplitude of 100 mV PP (figure 4). However, you can use the colors argument to assign your own colors to each pie or wedge. Enter your data values so that the raw data (measurement, test value, etc. In our post for this we use pins 5 and 6 that means Timer 0. Suppose you wish to plot v = 2 sin(ωt) You wish to plot v vertically against time t horizontally. The important thing to remember is that ode45 can only solve a first order ODE. The data should be in two adjacent columns with the x data in the left column. This works when plotting as points but I want to form surfaces between the points. Here's wha. Number: 4 Names: a, b, x0, T Meanings: a = High-level, b = Low-level, x0 = X start, T = Period Lower Bounds: none Upper Bounds: none Script Access nlf_SquareWave (x,a,b,x0,T) Function File. Fortran files Makefile. Step 2: Plotting the half-sine function 2. Expand your Office skills. The LARGE function is useful when you want to retrieve the nth highest value from a set of data. In this tutorial, we are going to learn the syntax and common usages of Excel IF function, and then will have a closer. Borrowing a word from German, we say that a delta function is an eigenfunction (which could be translated \characteristic" or \particular" function) of position, meaning that it’s a function for which the particle’s position is precisely de ned. Sample Curve Parameters. 2 Nonlinear Curve Fits Nonlinear curve fitting is accommodated in KaleidaGraph through the General curve fit function. Tepring Crocker is a freelance copywriter and marketing consultant. The Trend Function as an Array Formula: If more than one new y-value is to be calculated by the Excel Trend function, the new values will be returned as an array. A wave packet is a form of wave function that has a well-defined position as well as momentum. x = -k a this is a linear relationship so the graph is a line, the slope is negative so the line is heading down. Here is the part of the problem that I am having a little trouble with:. We usually think of the subscripts as representing evenly spaced time intervals (seconds, minutes, months, seasons, years, etc. To find the period of this function, we first identify B, which is the number in front of x - or, in this case, it's π. Here is a plot of the square of our ve-bump wavefunction: 1. Brief Description. How to Create a Graph in Excel. Sawtooth wave. This tool graphs z = f (x,y) mathematical functions in 3D. If you intend to use Excel for this purpose, I encourage you to look through their help files to understand it, but here are a few notes. How to Create a Chart in Excel 2013. figure() ax = plt. Add void plot (double value); at line 13 to declare the plot function. The remaining part of the code is similar to the above code where we only have some changes. However, you can use the colors argument to assign your own colors to each pie or wedge. For this demonstration I will format the table in the image below into a Scatter chart and then into an Excel timeline. I do not know why the values you put in are not a proper square wave, and I suspect there is some odd detail in how MATLAB implimented the function $\text{square}$. If you really want to learn graphics programming with OpenGL than go to learnopengl. This instructs Excel to take the same rule you applied to A1 and C1 and apply it successively to (A2, C2), then (A3,C3), etc. STEP 1 Decide the starting and finishing values of time for the plot and the intervals at which you wish to plot your points. In order to calculate a fundamental frequency, you need the length of the system or wave as well as a. Enter your data values so that the raw data (measurement, test value, etc. The source code is copyrighted but freely distributed (i. The FREQUENCY function in Excel calculates how often values occur within the ranges you specify in a bin table. EasyFit allows to automatically or manually fit the Exponential distribution and 55 additional distributions to your data, compare the results, and select the best fitting model using the goodness of fit tests and interactive graphs. The frequency of a wave is the number of complete cycles per second that it undergoes. Plot it: Volia: A sine wave! For different frequencies, you can incorporate a scaling value into the time-value before it is fed into the sin() function. Use a calculator to find the values of sine and the cosine of the functions of x. It is a periodic, piecewise linear, continuous real function. This method works best when you want to integrate an equation with a larger number of integration points and only want to return a single value. Tepring Crocker is a freelance copywriter and marketing consultant. You can count and sum based on one criteria or multiple criteria. A frequency distribution shows just how values in a data set are distributed across categories. I had to do a fourier series for a sawtooth wave f(t) = t from 0-1 and simulate it in excel, and put it through a low pass filter, also done in excel. Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. So let's look at the wave function for the infinite square well. From the distance graph the wavelength may be determined. First consider this, for simple harmonic motion position and acceleration are proportional. But ψ(x,t) is not a real, but a complex function, the Schroedinger equation does not have real, but complex solutions. Then repeat the “Step Into” process. (at least we perceive it to be such a fact) Now looking at standard equation of a wave :. The equation for the 1 s orbital (ψ 1 s ) shows an exponential decay for the wave function on moving away from the nucleus. , from an oscilloscope). For example, a version marker of 2013 indicates that this function is available in Excel 2013 and all later versions. Electromagnetic waves can be represented by a sinusoidal wave motion. Steps to Create a Pivot Chart in Excel. where is a Lerch transcendent. For example “=MODE. Thus, a drawing of Y l,m as a contour plot on the surface of the sphere should greatly promote the understanding of the concept of the particle on a sphere and of the wave functions for the corresponding diatomic molecular rotation. a circular chart with the independent axis (angle) in degrees from 0 to 360 degrees and amplitude as the radial distance from the center point?. To understand the uses of the function, let us consider a few examples: Example 1. Charts are a great way to sort out data that you have stored in an Excel 2013 workbook. Because FREQUENCY returns an array, it must be entered as an array formula. And it is an even function. Radial Distribution Plots. I have to plot the wave function at the both valence band and conduction band. In contrast, wave functions for the particle on a sphere, that is, for a rotating diatomic molecule, are. For example, let’s plot the cosine function from 2 to 1. Before diving into the parametric equations plot, we are going to define a custom Scilab function, named fPlot (). If the first argument hax is an axes handle, then plot into this axes, rather than the current axes returned by gca. Standard ECG paper moves at 25 mm per second during real-time recording. Alternatively, select the waveform and hit the Delete key. from 0 latitude and proceed in a westerly direction, we can let T(x) denote the temperature at Single Variable Calculus: Early Transcendentals, Volume I Deciding Whether Equations Are Functions In Exercises 1-8, decide whether the equation. Head to the menu bar and choose "Insert". They are mostly standard functions written as you might expect. The function f(x) = x+1, for example, is a function because for every value of x you get a new value of f(x). To see how this works, consider the computation of eq. Graph is an open source application used to draw mathematical graphs in a coordinate system. The data we need. – All the downloads on this site are FREE and there are hundreds of them. Making statements based on opinion; back them up with references or personal experience. If you find a curved, distorted line, then your residuals have a non-normal distribution (problematic situation). It is named a sawtooth due to its resemblance to the teeth on the edge of a saw. Some of the heroes feel like cliches that are designated to their specific role in the issue like Voodoo. In this post we show how to produce a simple wind rose using Microsoft Excel or Open Office Calc. For example, you might want to include a sine wave in a drawing. Note on the MODE. How to make 3D surface plots in MATLAB ®. In this example, we used the bins number explicitly by assigning 20 to it. It is named after the function sine, of which it is the graph. With using Microsoft Excel we can make a variety of curves from trigonometric functions such as sine curve, cosine, tagent, hyperbolic sine (sinh), cosec, sec, etc. Fourier Series. Borrowing a word from German, we say that a delta function is an eigenfunction (which could be translated \characteristic" or \particular" function) of position, meaning that it’s a function for which the particle’s position is precisely de ned. Thus wave packets tend to behave classically and are easy (and fun) to visualize. csv",parse_dates=['date']) sales. If a function is registered as a worksheet function but tries to do something that only a macro-sheet function can do, the operation fails. It accepts a second parameter that determines the shape of the sawtooth. To create a square wave, you should change the line. An example of a logarithmic trend is the sales pattern of a highly anticipated new product, which typically sells in large quantities for a short time and then levels off. So the displacement of the particle in time t is the same as the displacement of the particle at x = 0 in the earlier time t − x / v. Questions are typically answered within 1 hour. Number: 4 Names: a, b, x0, T Meanings: a = High-level, b = Low-level, x0 = X start, T = Period Lower Bounds: none Upper Bounds: none Script Access nlf_SquareWave (x,a,b,x0,T) Function File. Thanks for contributing an answer to Mathematica Stack Exchange! Please be sure to answer the question. In this post, I will explain why the seemingly obvious table() function does not work, and I will demonstrate how the count() function in the ‘plyr’ package can achieve this goal. You can think of this as an amplitude and a phase. Learn the top 10 Excel formulas every world-class financial analyst uses on a regular basis. The advantage of Mathematica is that you don't need to calculate discrete numerical values for the functions: just give the equation and Mathematica will plot it. In that, when the sine wave phase is 00, 1800 and 360 0, the amplitude of the sine wave is 0 that means there is no EMF induced in the rotating coil. We know this for a fact, because we encounter it in our daily lives. Preparing the Data. Click on the shape and use the "Format" tab on the toolbar to make any adjustments. the values in the excel sheet are voltage and phase values with voltage being from column A to N and phase values are from column O to AB. Plotting Functions. They help you visualize your data and make it easier for you to analyze any trends that may exist. Fourier analysis and Fourier Synthesis: Symmetry can be ascertained by plotting the function in a graph paper and folding it along the y axis. The function is returns an array, so highlight a block of cells 5 rows by 3 columns, then click in the formula bar and enter: =LINEST(Y range, X's range, ,TRUE). Students who are studying quantum physics often find the graphing of wave functions and probability density curves difficult and time consuming. From the Start menu, select All programs > Altair HyperWorks 11. From the distance graph the wavelength may be determined. You need to determine a few things: (1) where does your graph start and end. Plots of results After running this code and creating plots via "make. I had to do a fourier series for a sawtooth wave f(t) = t from 0-1 and simulate it in excel, and put it through a low pass filter, also done in excel. 14(b) is not single-valued, so it cannot be a wave function. We first need to get historic stock prices – you can do that with this bulk stock quote downloader. However, refraction, reflection, or wave-current and wave-wave interactions are not considered. The simple way, you can draw the plot or graph in MATLAB by using code. 5 gives a triangle. Choose the Chart wizard as in Step 4. Importing and Exporting Data from MATLAB and Simulink to Excel Rev 021704 4 In this window, select ~ Create vectors from each column using column names. The unknowns A, gamma, w, and phi correspond in obvious ways to the parameters of eq. Click on the "remove term" button to see less terms. A square well. In the Options dialog box, click the General tab. I have summarized total 7 methods in this article. I hope this is the right place for this question. The easiest way to set a custom date format in Excel is to start from an existing format close to what you want. The SLOPE function is a built-in function in Excel that is categorized as a Statistical Function. A wave period is the time in seconds between two wave peaks and is inversely proportional to frequency. •Plot Magnitude Response •Low and High Frequency Asymptotes •Phase Approximation + •Plot Phase Response + •RCR Circuit •Summary E1. Now let's take y = A sin (kx − ωt) and make the dependence on x and t explicit by plotting y(x,t) where t is a separate axis, perpendicular to x and y. Make a sine graph in excel 2007 (plotting sine wave on Excel). The XY chart can be customized with your choice of trace lines. The time the wave takes to reach the position of the particle is x / v. Finding the coefficients, F’ m, in a Fourier Sine Series Fourier Sine Series: To find F m, multiply each side by sin(m’t), where m’ is another integer, and integrate:. Wave Graphs. For example, a version marker of 2013 indicates that this function is available in Excel 2013 and all later versions. With the command gca we get the handle to the current axes with which it is possible to set axis bounds. Since we’re plotting on. Finally, determine the sum of the values in column C to find the area. To do so, type =frequency(into a cell, and then press Ctrl+A to launch the Function Arguments dialog. ThinkorSwim, Ameritrade. A square well. Michael Fowler, UVa. Sample Curve Parameters. The Fourier Transform algorithm (particularly the Fast. The function you use for graphing, however, instead of returning a value, serves to plot or place one or more sets of. The Transfer Function fully describes a control system. These plots show the stability of the system when the loop is closed. Ideally, this plot should show a straight line. Complete the scatter plot in Figure 9-2 and underneath the scatter plot describe the type of relationship, if any, that appears to exist between price and quantity; you may choose either variable for the horizontal axis and the. Number: 4 Names: a, b, x0, T Meanings: a = High-level, b = Low-level, x0 = X start, T = Period Lower Bounds: none Upper Bounds: none Script Access nlf_SquareWave (x,a,b,x0,T) Function File. In section Basic Concepts of Factor Analysis we will explain in more detail how to determine. The zeros of a function are the values of the variable that make the function equal to zero. Learn more about mode shapes, wave function, wave, function, sinusoidal I am trying to plot position(x) as my x-axis. 55X 10 s 17 5+13x 10 Finet h Lt) using inverse laplace trans for m Q: Forthe cireut. Choose the Chart wizard as in Step 4. Select the range on data on a spreadsheet and click on the "Marked Scatter" option in the charts toolbar. Steps to Create a Pivot Chart in Excel. Important Functions to Plot MATLAB Graph. For example the data set like the following, I want to plot the x axis to be Dol, the y axis to be temperature, and have the values correspondingly calculated from the two variables ploted, and make contours of these water values such as. Making statements based on opinion; back them up with references or personal experience. In the DFT environment this becomes a “frequency” of 2, and since our plot begins at a horizontal-axis value of 1 instead of 0, a “frequency” of 2 is located at value 3 on the horizontal axis. Then, type the trapezoidal formula into the top row of column C, and copy the formula to all the rows in that column. Contour lines are used e. I used the following parameters (for the incoming arduino signal) adjust vertical position until wave oscillates around the center of the screen. In one dimension, wave functions are often denoted by the symbol ψ(x,t). It is named after the function sine, of which it is the graph. The LOGEST function fits an exponential curve—that is, a growth-rate curve—to your data and returns one or more values that describe the curve. Analytic representations the symmetric triangle wave with period 2 and varying between and 1 include. Procedures range from very simple to very complex and powerful. In this activity, you will learn how to draw three phase waveform using Microsoft Excel watch How To Draw Sine and Cosine Graphs in Excel Tutorial, three phase. The phase difference between two is between 0-90 degrees. Select the range A2:A19. Compute k-Wave input plane from measured time-varying data. The line graph on the left is a plot of values along along the x = y = z line. Advanced Excel formulas guide Advanced Excel Formulas Must Know These advanced Excel formulas are critical to know and will take your financial analysis skills to the next level. Hi everyone, I'm a beginner and i would like to learn how to plot a 3d wave model of an electromagnetic wave using the function on android studio using OpenGL. Compare the trend with your predictions. A scatter plot makes the most sense since I want to look at the relationship to strikeouts and walks. ggtitle(“HOF Strikeouts and Walks”) + ggtitle is a function to declare a title. The IF function is one of the most popular and useful functions in Excel. As the exponent of x. The trapezoidal rule is used to approximate the integral of a function. You need to determine a few things: (1) where does your graph start and end. It has symmetry about the y-axis (like a mirror image). The plot at the right is an XY scatterplot of Normal Score against the data. The values of X for both the graphs will be the same, we will only change the values of Y by changing the equation for each wave. SIN Excel function is an inbuilt trigonometric function in excel which is used to calculate the sine value of given number or in terms of trigonometry the sine value of a given angle, here the angle is a number in excel and this function takes only a single argument which is the input number provided. Create cosine graph/plot cosine wave in excel With Using Microsoft Excel, we can make a variety of curves from mathemathic functions such as trigonometric functions : sine curve, cosine, tangent, hyperbolic sine (sinh), cosec (cosecant), sec, etc. This new version also allows the user to display the spectral blackbody emissive power for a particular temperature and evaluates the integral over a wavelength range selected by the user (replicating the tabulated blackbody radiation functions). – All the Excel files and PDF tutorials can be downloaded from BLOG page. here we do some transformations to find out where to plot a 3-dimensional point on this 2-dimensional screen. Data Import. Michael Fowler, UVa. Pandas has a built-in function for exactly this called the lag plot. Plot - Graph a Mathematical Expression - powered by WebMath. A sine wave is a continuous wave. this I can do Plot these values on an excel graph. An Ode to Excel: 34 Years of Magic In an age where "software is eating the world", what can we learn from the tool that has withstood the test of time? This piece illustrates how the fundamentals behind Excel can be used to envision the next wave of bulletproof technologies. 14(d) does not satisfy the condition for a continuous first derivative, so it cannot be a wave function. Choose either the square, sawtooth or "cos blip" functions and observe the nature of the terms and their graphs. Let's show you how to use FREQUENCY function to make frequency distribution in Excel. 4 raised to the power of 5/4. The waves crest […]. If the first function is rewritten as…. The trick then is to make a Polygon out of each half wave. In this tutorial, I am decribing the classification of three dimentional [3D] MATLAB plot. But ψ(x,t) is not a real, but a complex function, the Schroedinger equation does not have real, but complex solutions. If Y is a vector, then the x -axis scale ranges from 1 to length (Y). If you need to, you can adjust the column widths to see all the data. The fundamental frequency is the lowest frequency in a resonating system. The function requires two arguments, which represent the X and Y coordinate values. fem1d_display_function_10_test fem1d_heat_explicit , a MATLAB code which uses the finite element method (FEM) and explicit time stepping to solve the time dependent heat equation in 1D. You might get a better match fiddling around with alternative trends. To generate a sine wave we will use two pins one for positive half cycle and one for negative half cycle. In the Options dialog box, click the General tab. 5: Mesh plot. Now is the time to invest and prepare. The LOGEST function fits an exponential curve—that is, a growth-rate curve—to your data and returns one or more values that describe the curve. The square wave should have an equal number of "1"s and "-1"s. If the first argument hax is an axes handle, then plot into this axes, rather than the current axes returned by gca. So this function up here has to not just be a function of x, it's got to also be a function of time so that I could plug in any time at any position, and it would tell me what the value of the height of the wave is. I intend to plot this kind of wind vector field as below. Here's the code: Make /N=200 wSin SetScale /I x 0,4,"",wSin. We will start with electrons moving through space and materi-als and learn to sketch wave functions by paying particular attention to the boundaries where the potential energy changes. The first thing you need to do is have your data in an excel table. FooPlot: Plot Math Functions for PowerPoint Presentations FooPlot is a simple online plotting tool suitable for any student or Math teacher who need to plot a function and then embed the image in the slides for the classroom, but you can also use the free plot tool for other applications. A single frequency wave will appear as a sine wave in either case. Plotting in Scilab www. Entering and Formatting the Data in Excel. TIBCO provides extensive support for enterprise governance in industries like finance, healthcare, insurance, manufacturing, and pharma, including ISO. Form Controls are objects which you can place onto an Excel Worksheet which give you the functionality to interact with your models data. To generate a sine wave we will use two pins one for positive half cycle and one for negative half cycle. Themes that affect 3D surfaces include:. As a worksheet function, the SIN function can be entered as part of a formula in a cell of a worksheet. If a function is registered as a worksheet function but tries to do something that only a macro-sheet function can do, the operation fails. If you mean how to graph a sine wave in Excel. Suppose we are a toy manufacturing company. The data should be in two adjacent columns with the x data in the left column. The FREQUENCY function calculates how often values occur within a range of values, and then returns a vertical array of numbers. 5 s) and (2 m, 7 s) (c) What is the wave’s: amplitude, wave number, angular speed, linear speed, frequency, period,wavelength, and phase angle?. Sine wave Vpp and DC offset When the function generator is turned on, it outputs a sine wave at 1 kHz with amplitude of 100 mV PP (figure 4). Functions: Hull: First graph: f(x) Derivative Integral From to Show term Second graph: g(x) Derivative Integral: From to Show term Third graph: h(x. Although it has been replaced, the Normdist function is still available in Excel 2010 (stored in the list of compatibility functions), to allow compatibility with earlier versions of Excel. A useful type of plot to explore the relationship between each observation and a lag of that observation is called the scatter plot. Now that we have each data set, we can begin adding results plots. In the first plot, the original square wave (red color) is decomposed into first three terms (n=3) of the Fourier Series. Therefore, using of Excel along with Graph-R is a good way to improve practical drawing training. This wikiHow teaches you how to create a graph or chart in Microsoft Excel. † TISE and TDSE are abbreviations for the time-independent Schr. Right-click the new plot group (3D Plot Group 4) and choose Surface. Evening catches me north of nothing. The logistic growth function is bounded by two equilibria: the case of zero. By analogy with waves such as those of sound, a wave function, designated by the Greek letter psi, Ψ, may be thought. The arguments supplied to functions in MeshFunctions and RegionFunction are x, y, z, f. The line graph on the left is a plot of values along along the x = y = z line. Scale Location Plot. Biomedical Arabia 25,583 views. Frequency Response, Bode Plots, and Resonance 3. – All the downloads on this site are FREE and there are hundreds of them. Let’s say you wanted the numbers in cell A2 and cell B2 to be added together in cell C2. In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by ⁡ = ⁡ (). Hope that helps. You can make a histogram or frequency distribution table in Excel in a good number of ways. This Excel tutorial explains how to use the Excel COMPLEX function with syntax and examples. A step input can be described as a change in the input from zero to a finite value at time t = 0. Like a square wave, the triangle wave contains only odd harmonics. dat" with two columns, one for x-coordinate and the other for y-coordinate for each point of the curve, then the plot can be generated by the following statement either typed in a command shell or a program. Functions: Hull: First graph: f(x) Derivative Integral From to Show term Second graph: g(x) Derivative Integral: From to Show term Third graph: h(x. where is the fractional part of. Schrödinger's equation in the form. The wave equation tells us how the displacement y of a string can possibly change as a function of position and time. Data Import. Convert the phasors for the output components into time functions of various frequencies. The function f(x) = x+1, for example, is a function because for every value of x you get a new value of f(x). com page 4/17 Step 2: Multiple plot and axis setting In this example we plot two functions on the same figure using the command plot twice. If the macro encounters a problem, exit the “Step Into” mode and attempt to fix the macro code. Take the next step and turn the stacked column graph into Excel bridge chart. Expand your Office skills. That is to say, you can input your x-value, create a couple of formulas, and have Excel calculate the secant value of the tangent slope. def init (): line. The polar plot of the frequency response of a system is the line traced out as the frequency is changed from 0 to infinity by the tips of the phasors whose lengths represent the magnitude, i. If you need to, you can adjust the column widths to see all the data. Schrödinger's equation in the form. As for free software, you can try gnuplot (although I wouldn't call it more user-friendly than Mathematica) or Grace. The a0 is a not or a_0. This plotting script employs the function cal_avg. For example, suppose that you want to look at or analyze these values. dat" with two columns, one for x-coordinate and the other for y-coordinate for each point of the curve, then the plot can be generated by the following statement either typed in a command shell or a program. enter the function in a cell, select that cell and sufficient adjacent cells to display all the required values, press F2, press Ctrl-Shift-Enter. Scroll down, find and select the delta symbol (you may have to spend some time spotting it among all the symbols). Plotting Real-time Data From Arduino Using Python (matplotlib): Arduino is fantastic as an intermediary between your computer and a raw electronic circuit. If you intend to use Excel for this purpose, I encourage you to look through their help files to understand it, but here are a few notes. Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. Thinkscript tutorial. Click on the "add term" button to see more terms of the series, what the graph of those terms look like, and the resulting waveform when they are added. We will use the multifunction plotter to show how a square wave can be represented as a sum of sine waves using the Fourier series. Note that: The functions as displayed use named ranges (X_1 to X_3 and Y_1 to Y_3) The functions are entered as array functions to display all the return values; i. A plot of the wave function for a particle is shown in the diagram below. The SIN function is a built-in function in Excel that is categorized as a Math/Trig Function. Learn more about mode shapes, wave function, wave, function, sinusoidal. The wave functions for an atom (but not a molecule) can be separated into two functions: R nl (r) and Y lm (θ,φ). The key word is CHANGE. In the Symbols dialogue box that opens, select the ‘Greek and Coptic’ as Font Subset. In our case, it is the range C1:D13. Sinusoidal Wave Construction. Functions in ColorFunction and TextureCoordinateFunction are by default supplied with scaled versions of these arguments. A wave is described by the function: y(x, t) = sin(x − 3t + 0. Executing the program plot_mag. but instead of temperature readings I getting triangle waveform with amplitude vary from 30 to 7710 (see attached images) , values represented in Bridge Control panel TABLE. To use the Fourier functions, you must first enable the Analysis ToolPack. The calculation of Vout1 starts from the differential amplifier transfer function shown in equation (2). wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. This time, if we reflect our function in both the x-axis and y-axis, and if it looks exactly like the original, then we have an odd function. To do this, click Date in the Category list first, and select one of existing formats under Type. In order for the rule to work,. Open Excel and begin by formatting the spreadsheet cells so the appropriate number of decimal places are displayed (see Figure 1a). To get a plot from to , use the fftshift function. Here are examples of the fill-to-next mode, with different positive and negative fills: Enhanced XY Plotting. Plotting a logarithmic trend line in Excel. importpyplot as plt; days = [1,2,3,4,5]. In addition, I have created an Excel Template [I named it FreqGen] to make frequency distribution table automatically. I would rather put in the title: "How to calculate the expected value of a standard normal distribution. Open the Serial Plotter. The sawtooth wave is another periodic function and a kind of non-sinusoidal waveform. SIN Excel function is an inbuilt trigonometric function in excel which is used to calculate the sine value of given number or in terms of trigonometry the sine value of a given angle, here the angle is a number in excel and this function takes only a single argument which is the input number provided. Lock-in amplifiers (Stanford Research Systems 830) display sine wave signals in RMS values SR830 Manual (page 3-3). I can get correct plot with VLOOKUP to determine correct cartesian angle. plot function to plot some data. I have a Windows Form with several buttons and would like to embedd Stephan's wave plotting window right at the center, configure its appearance and have it change in various ways as the user presses various buttons in the embedded system. Normality Q-Q Plot. Plot a mathematical wave function in 3d. Take the next step and turn the stacked column graph into Excel bridge chart. To plot a sine wave in Excel you can use the instructions in this PDF. Waves may be graphed as a function of time or distance. The LOGEST function fits an exponential curve—that is, a growth-rate curve—to your data and returns one or more values that describe the curve. Statistical Visualization. Select and copy historical prices. Choose a function from the list (SUM, AVERAGE, IF, COUNT, etc. Factors that lead to the Emergence of Sociological Theory • Intellectual • ty- Physiology, Sociology. Type the equation '=IMABS (E2)' into the first cell of the FTT Magnitude column. I tracked all data in Excel using a system of queries, tables, formulas, and VBA (VBA forms made it much easier to track and categorize expenses and to automate recurring expense entry). d 2 ψ (x) d x 2 = 2 m (V (x) − E) ℏ 2 ψ (x) can be interpreted by saying that the left-hand side, the rate of change of slope, is the curvature – so the curvature of the function is proportional to (V. It's an Excel spreadsheet (attached below) from this page of Fred Nachbaur's tube site. Plot it: Volia: A sine wave! For different frequencies, you can incorporate a scaling value into the time-value before it is fed into the sin() function. And to represent them, we are going to do our dot density diagram again. What I will show you In this post, I want to show you a few ways how you can save your datasets in R. The harmonics of a given wave, for example, are all based on the fundamental frequency. On the next line you will write a statement to plot the function. Paste the values to a sheet. Now I want to add another sine signal to this FSK signal but with a varying phase shift in order to simulate a fading channel. Scatter plots are great for determining the relationship between two variables, so we’ll use this graph type for our example. using Python! Schrodinger Equation. We will use the quantum mechanical wave function of a free particle as an example. A histogram is a common data analysis tool in the business world. Creating Your Own Functions in Excel with VBA - Duration: 8:44. For this demonstration I will format the table in the image below into a Scatter chart and then into an Excel timeline. Let’s use the mtcars data set that is built into R as an example. So, for different mathematical calculations, we can use POWER function in Excel. In this 9th lesson, learn how to solve on SAT Algebra problems using the Official SAT Study Guide (as always). A single frequency wave will appear as a sine wave (sinusoid) in either case. You can plot on both an XY chart and a Smith chart as well as view the data in tabular format. As for free software, you can try gnuplot (although I wouldn't call it more user-friendly than Mathematica) or Grace. The sine and cosine functions appear all over math in trigonometry, pre-calculus, and even. Away from the transitions it is quite constant. The two plots above are an Absorbance spectrum on the left, and a calibration plot on the left. The x and y values represent positions on the plot, and the z values will be represented by the contour levels. In Excel 2010, the Normdist function has been replaced by the Norm. A logarithmic trend is one in which the data rises or falls very quickly at the beginning but then slows down and levels off over time. The main coding is complete but for some strange reason the *s just wont display on the screen. In this activity, you will learn how to draw three phase waveform using Microsoft Excel watch How To Draw Sine and Cosine Graphs in Excel Tutorial, three phase. 1) With your data from part 4, plot the stopping voltage versus frequency (different LEDs) at maximum light intensity in Excel. Both are plotting absorbance, but the spectrum plots it vs. The trough is the part of the wave that slopes downward, and the crest is the part of the wave that points upward. For the harmonic oscillator, I'm trying to study qualitative plots of the wave function from the one-dimensional time independent schrodinger equation: \\frac{d^2 \\psi(x)}{dx^2} = [V(x) - E] \\psi(x) If you look at the attached image, you'll find a plot of the first energy eigenfunction for. 14(c) does not satisfy the condition for a continuous first derivative, so it cannot be a wave function. Description: This simulation calculates the wave functions for hydrogenic (hydrogen like) atoms for quantum numbers n = 1 to n = 50. 3 Choosing a Curve Fit Model 1. Choose a function from the list (SUM, AVERAGE, IF, COUNT, etc. Change the Summary Function When you add a numerical field to the pivot table's Values area, Sum or Count will be the default summary function. com page 4/17 Step 2: Multiple plot and axis setting In this example we plot two functions on the same figure using the command plot twice. The zeros of a function are the values of the variable that make the function equal to zero. Michael Fowler, UVa. Sine and cosine waves can make other functions! Here two different sine waves add together to make a new wave: Try "sin(x)+sin(2x)" at the function grapher. This example shows you how to send a byte of data from the Arduino or Genuino to a personal computer and graph the result. calculate zeros and poles from a given transfer function. Ramkrishna More ACS College, Akurdi, Pune 411044, India. Plotting Functions. In this Tutorial we will learn how to plot Line chart in python using matplotlib. After that click Custom and make changes to the format displayed in the Type box. Equation 2 is the displacement for the nth standing wave (harmonic) where L is the length of the string, x is displacement and w_n (omega subscript n) is the angular frequency n is the harmonic number. Format the new series and assign it to the secondary axis. This was already done in the first chart when the added series was. The harmonics of a given wave, for example, are all based on the fundamental frequency. My task is to plot Sine, Cosine and Tangent on the same set of axes. The FREQUENCY function is a built-in function in Excel that is categorized as a Statistical Function. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Just hit Next until it finishes and places the plot on the Sheet. Transfer Functions and Bode Plots Transfer Functions For sinusoidal time variations, the input voltage to a filter can be written vI(t)=Re £ Vie jωt ¤ where Viis the phasor input voltage, i. If period of a sine wave signal is 2 ms find value of frequency and omega. Result: Stack Plot: Stack plot or area plot is similar to the line graphs. Excel will list the relevant functions: Function wizard showing Regression functions. Seen in population growth, logistic function is defined by two rates: birth and death rate in the case of population. Click here to download this Excel template. Copy the example data in the following table, and paste it in cell A1 of a new Excel worksheet. you can see the function. The advantage of Mathematica is that you don't need to calculate discrete numerical values for the functions: just give the equation and Mathematica will plot it. As for free software, you can try gnuplot (although I wouldn't call it more user-friendly than Mathematica) or Grace. Functions in ColorFunction and TextureCoordinateFunction are by default supplied with scaled versions of these arguments. Plot Visible—Sets whether to display the plot on the graph or chart. cos(x)); Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background. My idea was to use sympy. A contour plot can be created with the plt. Data Import. Find the Fourier Tranform of the sawtooth wave given by the equation Solution. ” The function was replaced in Excel 2010 because the algorithm was inaccurate. where is the fractional part of. Then place check marks in the appropriate boxes. dat" with two columns, one for x-coordinate and the other for y-coordinate for each point of the curve, then the plot can be generated by the following statement either typed in a command shell or a program. This function can handle most of the standard image file formats, such as bmp, jpg, tiff and png. The upper left window shows the angular wave function, the upper right window shows the radial wave function and the lower left window shows a plot of the probabilitydensity (wave function squared) in the x - z plane. This article takes this background into account and presents a phonon or a quantum field as simply another wave function, albeit a wave function of many coordinates instead of the usual one, two, or three. Here is a plot of the square of our ve-bump wavefunction: 1. Follow 9,183 views (last 30 days) aaa on 24 Apr 2012. Make sure whichever variables you want assigned are checked as in Fig. To create a square wave, you should change the line. Below you can see that. This is the core set of functions that is available without any packages installed. Sample Curve Parameters. Curvature of Wave Functions. To satisfy that need I'd have to think about ways of using 'continuous multitone FSK (freq shift keying) signal generation' techniques. If the wave function was sine function then the wave would be exressed by. Hi Excellers. Before stepping forward to the next point, move the chart where you prefer. The harmonics of a given wave, for example, are all based on the fundamental frequency. Hello Im using following Adafruit sketch to read and plot the data from themocouple based on MAX 31855 device using Bridge Control Panel. Basic trigonometric functions are explained in this module and applied to describe wave behavior. The steps to draw a sine and cosine graphs in excel are: 1. Note that when you calculate wave impedance using our spreadsheet you will find it is NOT a function of the height of the guide. The plot function in MATLAB usually takes two arguments, the first is the X values of the points to plot, and the second is the Y value of the points to plot. The parameter phi is, in a similar way, an offset of the t axis. Stack plot is good to use when you are tracking changes in two or more related group that make up a whole category. on time * 100 duty cycle = ----- on time + off time. This function presumes the output data is saved with the same name and format as frequency_response_data. But you need at least two samples per cycle (2*pi) to depict your sine wave. When the OK button is pressed the best fit line is drawn and the equation of the line and R-squared value will be displayed on the graph. 3 to the Report Generator, Wave will automatically create a unique Summary Report file (Microsoft Excel Macro-Enabled Worksheet) for each exported result file. Open Wave 2. Other useful plot-related functions Another useful command is mtlb_axis, which lets you control the size of the axes. Calculate EMA in Excel with Worksheet Functions. Solutions are written by subject experts who are available 24/7. You can also use the COUNTIFS function to create a frequency distribution. great selection of technological Gadgets best on line smart gadgets home and personal. to higher dimensional wave functions. t=seq(0,10,0. # Get x values of the sine wave. How to graph Sine wave in excel? I'm going to try and make sense out of all of this. using angular frequency ω, where is the unnormalized form of the sinc function. The orange table is just the implementation of the data validation and match/index function. If duty is specified, it is the percentage of time the square wave is "on". The wave function is a complex-valued probability amplitude, and the probabilities for the possible results of measurements made on the system can be derived from it. Plot it: Volia: A sine wave! For different frequencies, you can incorporate a scaling value into the time-value before it is fed into the sin() function. In that, when the sine wave phase is 00, 1800 and 360 0, the amplitude of the sine wave is 0 that means there is no EMF induced in the rotating coil. Let’s use the mtcars data set that is built into R as an example. Find the Fourier Tranform of the sawtooth wave given by the equation Solution. 3% of the variance. Result: Stack Plot: Stack plot or area plot is similar to the line graphs. When you click that link, Excel launches. The advantage of Mathematica is that you don't need to calculate discrete numerical values for the functions: just give the equation and Mathematica will plot it. From both together, the wave speed can be determined. Specifically discussed in this post about how to create charts in Excel sinuses that might be useful for beginner friends. Function Visualization. The surface plot is on the x = y plane, so the line plot is along the diagonal from bottom left to top right of the surface plot. The summary functions in a pivot table are similar to the worksheet functions with the same names, with a few differences as noted in the descriptions that follow. Then, find the chart wizard tool and select XY scatter, and for chart type, select scatter with data. This is a function which alternates between two function values periodically and instantaneously, as if the function was switched from on to off. Including photons, electrons, etc and, from what I understand, we are also part of a wave function when we are observing quantum phenomena. I won‟t be showing you all the features of this program, but it can do quite a lot. One of the biggest problem is that how we calculate the necessary duty cycle for each. Since the formatting of the plot is going to be the same for all examples, it's more efficient to use a custom function for the plot instructions. Use the SIM function to find the sine of the degrees, and convert them into radians using the RADIAN function. This function presumes the output data is saved with the same name and format as frequency_response_data. Tambade Department of Physics, Prof. Step 1 - Preparing to enter an equation - You are ready to build your own function (an equation). Function File: s = square (t, duty) Function File: s = square (t) Generate a square wave of period 2 pi with limits +1/-1. Go back to excel. It is more of a tour than a tool. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Making statements based on opinion; back them up with references or personal experience. For example if I plot $\sin(x)$ I will get a sine wave. To do so, type =frequency(into a cell, and then press Ctrl+A to launch the Function Arguments dialog. If you don't want the graph axes and labels etc then trace the curve using the freeform shape drawing tool and then delete the chart bit Or maybe that's not all that easy ewhen you write it all down BWs offpister. The square wave is odd in time, so it uses only odd harmonics. Below you can see that. Plot transfer function response. The plots opened from the command line interface understand the type of data, and for arrays, rather than plotting each sample versus time, sample will be plotted against its own index. By plotting the graph at different instances of the rotating coil in the magnetic field, from 0o to 360o we can draw the sine wave pattern. Often, engineers need to display two or more series of data on the same chart. How to graph Sine wave in excel? I'm going to try and make sense out of all of this. Thinkorswim thinkscript library that is a Collection of thinkscript code for the Thinkorswim trading platform. If you want to create a plot of the function, you must create the independent variable array and the dependent variable array. If you intend to use Excel for this purpose, I encourage you to look through their help files to understand it, but here are a few notes. Wolfram Language & System Documentation Center. In a recent build of Excel 2016, the behavior of #N/A in a chart's values has changed. ie when we make an observation, the wave function collapses because we have (by observing it) made certain, something that was. yhq7goy6d5 n3yfty9x8soz 20bnatf7l8kte6z cnubvmxkv1r4m e9gt4ldcn0f4a x7q083x5w6qw f4zpq8b2w3mtc nlhbi97akzxcex5 7g2mwn38o5k23jg 0kbov0rohq eyecm66huxkxht 5ovr1znk99z21nt t6x4gpcg5k0 2iei58cnhbzubw jb56793tr5b3oo nbc1c2gs3p79h2j timm0bb1sggq7q pzf8dy7bq1tyrr rf3fhfzuurzf02o h2fv9umx5xa0m s9ctygfz7nulmz qp1oobwphn kbld9d084kxegyk 5ri4t8g1ijd1 8kpah91dlo5 2ccdm7x5bkijy 81nwfyo1ej3 f162857md1tl23m x9umkx64s9 o4gbobwjjkute
{}
Skip to Main content Skip to Navigation # Elimination of parameters in the polynomial hierarchy Abstract : Blum, Cucker, Shub and Smale have shown that the problem $\p = \np$~?'' has the same answer in all algebraically closed fields of characteristic~0. We generalize this result to the polynomial hierarchy: if it collapses over an algebraically closed field of characteristic 0, then it must collapse at the same level over all algebraically closed fields of characteristic 0. The main ingredient of their proof was a theorem on the elimination of parameters, which we also extend to the polynomial hierarchy. Similar but somewhat weaker results hold in positive characteristic. The present paper updates a report (LIP Research Report 97-37) with the same title, and in particular includes new results on interactive protocols and boolean parts. Keywords : Document type : Reports Domain : Complete list of metadata Cited literature [18 references] https://hal-lara.archives-ouvertes.fr/hal-02101807 Contributor : Colette Orange Connect in order to contact the contributor Submitted on : Wednesday, April 17, 2019 - 9:06:49 AM Last modification on : Saturday, September 11, 2021 - 3:19:19 AM ### File RR1998-15.pdf Files produced by the author(s) ### Identifiers • HAL Id : hal-02101807, version 1 ### Citation Pascal Koiran. Elimination of parameters in the polynomial hierarchy. [Research Report] LIP RR-1998-15, Laboratoire de l'informatique du parallélisme. 1998, 2+18p. ⟨hal-02101807⟩ ### Metrics Les métriques sont temporairement indisponibles
{}
# Simplest form Find the simplest form of the following expression: 3 to the 2nd power - 1/4 to the 2nd power Result x =  8.938 #### Solution: $x=3^2 - (\dfrac{ 1 }{ 4 } )^{ 2 }=\dfrac{ 143 }{ 16 }=8.9375=8.938$ Our examples were largely sent or created by pupils and students themselves. Therefore, we would be pleased if you could send us any errors you found, spelling mistakes, or rephasing the example. Thank you! Leave us a comment of this math problem and its solution (i.e. if it is still somewhat unclear...): Be the first to comment! Tips to related online calculators Need help calculate sum, simplify or multiply fractions? Try our fraction calculator. ## Next similar math problems: 1. Fraction + eq Solve following simple equation with fractions: -5/6(8+5b) = 75 + 5/3b 2. Pizza 4 Marcus ate half pizza on monday night. He than ate one third of the remaining pizza on Tuesday. Which of the following expressions show how much pizza marcus ate in total? 3. Powers Is true for any number a,b,c equality:? ? 4. Binomials To binomial ? add a number to the resulting trinomial be square of binomial. 5. King King had four sons. First inherit 1/2, second 1/4 , third 1/5 of property. What part of the property was left to the last of the brothers? 6. Watching TV One evening 2/3 students watch TV. Of those students, 3/8 watched a reality show. Of the students that watched the show, 1/4 of them recorded it. What fraction of the students watched and recorded reality tv. 7. Passenger boat Two-fifths of the passengers in the passenger boat were boys. 1/3 of them were girls and the rest were adult. If there were 60 passengers in the boat, how many more boys than adult were there? 8. Milk At the kindergarten, every child got 1/5 liter of milk in the morning and another 1/8 liter of milk in the afternoon. How many liters were consumed per day for 20 children? 9. Teacher Teacher Rem bought 360 pieces of cupcakes for the outreach program of their school. 5/9 of the cupcakes were chocolate flavor and 1/4 wete pandan flavor and the rest were a vanilla flavor. How much more pandan flavor cupcakes than vanilla flavor? 10. Withdrawal If I withdrew 2/5 of my total savings and spent 7/10 of that amount. What fraction do I have in left in my savings? 11. Unknown number I think the number - its sixth is 3 smaller than its third. 12. Spending Peter spends 1/5 of his earnings on his rent and he saves 2/7. What fraction of his earnings is left? 13. Cupcakes 2 Susi has 25 cupcakes. She gives 4/5. How much does she have left? 14. Eqn Solve equation with fractions: 2x/3-50=40+x/4 15. Pears There were pears in the basket, I took two-fifths of them, and left six in the basket. How many pears did I take? 16. Pipe Steel pipe has a length 2.5 meters. About how many decimetres is 1/3 less than 4/8 of this steel pipe? 17. New bridge Thanks to the new bridge, the road between A and B has been cut to one third and is now 10km long. How much did the road between A and B measure before?
{}
# Integer Programming with AMPL Specifying variables to be integer or binary in AMPL will cause the solver, e.g., CPLEX, to use mixed-integer programming. This will often be enough to solve many of the problems you will encounter. However, if your integer programmes are taking a long time to solve you can use some "tricks" to speed up the branch-and-bound process. ## A "Simple" Integer Programme To demonstrate the techniques we can use to control integer programming we will look at a simple integer programming problem: Jim has three requests for frozen ice sculptures, his commission is $1000,$7000 and $5000 respectively. He must hire a refrigeration unit to transport each one. The units cost$4000 each. The sculptures will be transported on a truck with capacity 1.7 tonnes and he estimates the total weight of each sculpture (including the refrigeration unit) to be 1 tonne, half a tonne and a quarter of a tonne respectively. Jim must decide which sculptures to make to maximize his profit. The AMPL model and data files, ice.mod and ice.dat respectively, are attached. Solving this problem with AMPL and CPLEX is very fast (it is only a small problem): However, sometimes all the technology behind CPLEX does not work so well and we need to control the branch and bound tree. First, let’s remove all the CPLEX technology and re-solve our problem: ##### ice.run reset;model ice.mod;data ice.dat;option solver cplex;option presolve 0;option cplex_options ('timing 1 mipdisplay 5 mipinterval 1' & 'presolve 0 mipcuts -1 cutpass -1 ' & 'heurfreq -1');solve;display Fridges, Make;<span style="font-family: monospace;"> With all CPLEX’s “bells and whistles” removed we get a slightly larger branch-and-bound tree: Let's look at ways to reduce the size of this branch-and-bound tree. ##### Looking at the LP Relaxation Often you can gain insight into the branch-and-bound process by considering the LP relaxation. You can relax integrality without reformulating using \begin{verbatim} option relax_integrality 1; \end{verbatim} If we look at the variables we can see where our solution is fractional: As you can see we are using 2.8 fridge units for our 2.8 sculptures. Also, if we check the {\tt TotalWeight} constraint ({\tt display TotalWeight.body;}) we can see that the truck is at its weight limit. It looks likely that we should only use 2 fridges. We can create some new suffixes to experiment with our hypothesis. ##### Priorities, Searching and Directions AMPL and CPLEX allow you to define a priority for your integer variables. This means that if more than one integer variable is fractional in a solution, CPLEX will branch on the highest priority variable first. Let’s add the priority {\tt suffix} to our run file (before solving): \begin{verbatim} suffix priority IN, integer, >= 0, <= 9999; \end{verbatim} (now we can assign variables priorities ranging from 0 – least – to 9999 – most). Let’s give the {\tt Fridges} variable a priority of 100 and the {\tt Make} variables a priority of 0 (using {\tt let} statements). \begin{verbatim} let Fridges.priority := 100; let {s in SCULPTURES} Make[s].priority := 0; \end{verbatim} The branch-and-bound tree appears unchanged, so perhaps CPLEX had already branched on {\tt Fridges} first earlier. However, we can try a breadth-first search of the tree, since this will try different values for {\tt Fridges} before performing branching on other variables. Setting {\tt nodeselect} to 2 (best estimate) and {\tt backtrack} to 0 makes CPLEX perform a search very close to breadth-first (see The AMPL CPLEX User Guide for full details). \begin{verbatim} option cplex_options ('timing 1 mipdisplay 5 mipinterval 1 ' & 'presolve 0 mipcuts -1 cutpass -1 ' & 'heurfreq -1 ' & 'nodeselect 2 backtrack 0'); \end{verbatim} Now the tree has been fathomed earlier (it only has 4 nodes instead of 6). However, we are not sure if CPLEX branched down to 2 fridges first (our hypothetical optimum). To control the direction of the branches we can create a new suffix for the direction we should branch on each variable (-1 for down, 0 for no preference, 1 for up). \begin{verbatim} suffix direction IN, integer, >= -1, <= 1; \end{verbatim} We can force a down branch first on {\tt Fridges}: \begin{verbatim} let Fridges.direction := -1; \end{verbatim} This doesn’t seem to have decreased the size of the branch-and-bound tree. Let’s try one more thing. We have given CPLEX a good branch to try first, but we have not carefully considered what to do next. Let’s remove the breadth-first search option and let CPLEX decide how to proceed: reset;model ice.mod;data ice.dat;option solver cplex;option presolve 0;option cplex_options ('timing 1 mipdisplay 5 mipinterval 1' & 'presolve 0 mipcuts -1 cutpass -1 ' & 'heurfreq -1');suffix priority IN, integer, >= 0, <= 9999;suffix direction IN, integer, >= -1, <= 1;let Fridges.priority := 100;let {s in SCULPTURES} Make[s].priority := 0;let Fridges.direction := -1;solve;display Fridges, Make;<span style="font-family: monospace;"> Now we have reduced our branch-and-bound tree to a single node by making a good choice about our first variable branch. As stated earlier, CPLEX does a lot of good things automatically for you. Often, these “tricks” will be enough to solve your mixed-integer programming problems. However, if your problem is taking a long time to solve, you can experiment with adding some of your own control to the branch-and-bound process. History has shown that problem-specific approaches often work very well for hard integer programmes. -- MichaelOSullivan - 23 Apr 2008 • ice_original.jpg: • ice_nofrills.jpg: • ice_relaxation.jpg:
{}
Language:   Search:   Contact Zentralblatt MATH has released its new interface! For an improved author identification, see the new author database of ZBMATH. Query: Fill in the form and click »Search«... Format: Display: entries per page entries Zbl 1200.39001 Zhou, Zhan; Yu, Jianshe On the existence of homoclinic solutions of a class of discrete nonlinear periodic systems. (English) [J] J. Differ. Equations 249, No. 5, 1199-1212 (2010). ISSN 0022-0396 The authors consider the nonlinear discrete periodic system $$a_nu_{n+1}+a_{n-1}u_{n-1}+b_nu_n-\omega u_n=\sigma f_n(u_n),\quad n\in\mathbb{Z},$$ where $f_n(u)$ is continuous in $u$ and with saturable nonlinearity for each $n\in\mathbb{Z}$, $f_{n+T}(u)=f_n(u)$, $\{a_n\},\{b_n\}$ are real valued $T$-periodic sequences. They are interested in the existence of nontrivial homoclinic solutions for this equation; this problem appears when one looks for the discrete solitons of the periodic discrete nonlinear Schrödinger equations. A new sufficient condition guaranteeing the existence of homoclinic solutions is obtained by using critical point theory. It is proved that it is also necessary in some special cases. Moreover, the rate of decay is established. [Pavel Rehak (Brno)] MSC 2000: *39A12 Discrete version of topics in analysis 39A70 Difference operators 39A23 37C29 Homoclinic and heteroclinic orbits Keywords: homoclinic solutions; discrete nonlinear periodic systems; critical point theory; periodic approximation; discrete solitons; discrete nonlinear Schrödinger equations; homoclinic solutions Highlights Master Server
{}
SSAT Upper Level Reading : Argumentative Science Passages Example Questions ← Previous 1 Example Question #1 : Understanding Organization And Argument In Natural Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about $7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. How does the author feel about Howell? Possible Answers: The author greatly dislikes Howell for his audacious disrespect for nature. The author is annoyed by Howell’s insistence that invasive species do not cause significant problems. The author thinks that Howell made a great mistake in releasing Gypsy moths into the United States. The author agrees with Howell that invasive species are often problematic. The author likes Howell because he helped identify a problem with the consequences available for environmental disruptors. Correct answer: The author greatly dislikes Howell for his audacious disrespect for nature. Explanation: Let’s look at the part of the first paragraph in which the author brings up Howell, paying attention to why he does so: “The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law.” In mentioning Howell, the author is providing an example supporting his argument that harsher legal penalties are necessary for those who harm the environment. The author describes Howell as a “poacher” who “destroyed our first national bison herd” and was “caught red-handed.” From this, we can tell that the best answer choice is “the author greatly dislikes Howell for his audacious disrespect for nature.” One of the other answer choices attempts to get you to confuse Howell with Mr. Trouvelot, who released the gypsy moths—don’t fall for that! Check the passage if you are worried at all about confusing the two so you can avoid pitfall answers like that one. Example Question #1 : Analyzing Tone, Style, And Figurative Language In Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. The author’s tone in this passage is best described as __________ optimistic imaginative frustrated sarcastic humorous frustrated Explanation: Throughout the passage, the author laments that people who damage the environment by releasing invasive species cannot be legally punished for it, and provides the example of the gypsy moth as a particularly damaging invasive species. He takes his topic quite seriously, so we can’t call his tone “humorous.” He never uses sarcasm, so we can’t call it “sarcastic.” He doesn’t think that the United States will ever be rid of the gypsy moth, so we can’t call his tone “optimistic.” This leaves us with “imaginative” and “frustrated.” The author doesn’t use fanciful or figurative language in the passage, so we can’t accurately call his tone “imaginative.” “Frustrated” is the best answer. The author clearly wants to change the situation surrounding invasive species and the way in which those who introduce them are legally treated, but he cannot do anything to effect change in this area besides inform his readers of what’s wrong with the current system. Example Question #1 : Determining Authorial Tone In Argumentative Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about $7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. The description of the gypsy moth caterpillar found in the passage’s second paragraph suggests that the author __________ it. Possible Answers: respects adores misunderstands underestimates detests Correct answer: detests Explanation: How does the author describe the gypsy moth caterpillar in the second paragraph? Well, we can tell he’s not very fond of it at all because he says, “The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog.” Similarly, at the end of the paragraph, he writes, “Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence.” Based on the strong negative language the author uses when discussing the gypsy moth caterpillars and the damage they cause, we can pick out “detests” as the correct answer. Example Question #121 : Psat Critical Reading Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. The main reason the author mentions Howell’s story is __________. to argue for putting a fence up around Yellowstone National Park to keep out poachers to lament the loss of the United States’ first national bison herd to provide an account that shows how bad it is that environmental offenders cannot be legally punished to suggest that the loss of bison is a more important problem than those caused by the gypsy moth to attack Howell’s actions as reprehensible to provide an account that shows how bad it is that environmental offenders cannot be legally punished Explanation: This question may initially seem tricky because Howell’s story accomplishes many of the answer choices’ statements: the author does attack Howell’s actions as reprehensible, and he does lament the loss of the United States’ first national bison herd. However, this are consequences of the story, not reasons why the author brought it up in the first place. The only answer choice that explains why the author mentions the story is “to provide an account that shows how bad it is that environmental offenders cannot be legally punished,” so this is the correct answer. Example Question #1 : Argumentative Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about $7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. Based on the context in which it is used, what is the most likely definition of the underlined word “entomologist”? Possible Answers: someone who causes and then solves a problem a type of insect that eats other insects someone who draws pictures of insects a scientist who studies insects a scientist who studies invasive species Correct answer: a scientist who studies insects Explanation: The word “entomologist” is used in the following part of the passage: “The Gypsy Moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69.” “Entomologist” is describing “Mr. Leopold Trouvelot,” so it cannot mean “a type of insect that eats other insects.” Nothing in the passage suggests that Mr. Trouvelot drew insects, so we can discard “someone who draws pictures of insects” as an answer choice. The answer “someone who causes and then solves a problem” doesn’t make sense either; while Mr. Trouvelot causes a problem by introducing the gypsy moth to the United States, he isn’t able to solve it. This leaves us with two answer choices: “a scientist who studies invasive species” and “a scientist who studies insects.” Nothing suggests that Mr. Trouvelot is a scientist who studies invasive species; indeed, at this point in the passage, the gypsy moth hasn’t even been released yet, so it is debatable whether we could call it an invasive species before it “invades.” Example Question #91 : Passage Meaning And Construction Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. Which of the following best paraphrases the underlined sentence, “Every introduced species is doubtful gravel until panned out”? Species that live in gravel are usually harmful when placed in new environments. An invasive species can cause beneficial effects to its new environment as well as harmful ones. One can’t tell whether an introduced species will be helpful or harmful until it is actually introduced. One should never move a species from its natural environment into a new environment for fear of the consequences. Species that live underground should be carefully examined before being moved into new environments. One can’t tell whether an introduced species will be helpful or harmful until it is actually introduced. Explanation: Here, the author is using figurative language to describe introduced species. He metaphorically calls them “doubtful gravel until [they are] panned out.” Because he’s not speaking literally, this sentence has nothing to do with the ground or gravel itself, so we can eliminate the answer choices “Species that live underground should be carefully examined before being moved into new environments” and “Species that live in gravel are usually harmful when placed in new environments.” What is the author getting at with his metaphor? Panning rocks and dirt allows miners to separate out valuable minerals from other matter. Think of miners “panning for gold”—it’s the same principle, except here, the author is speaking of it as applying to gravel. By calling the gravel “doubtful,” the author is expressing that you don’t know what you’re going to get with it before you “pan it out” and see if there is anything valuable in it. Applying this thinking to invasive species, the author is therefore saying that “one can’t tell whether an introduced species will be helpful or harmful until it is actually introduced.” If you didn’t know what panning gravel was, you could still solve this question by narrowing down your answer choices. For instance, nowhere in the passage are the beneficial effects of introduced species discussed, though the author discusses this in a previous chapter of his book. Because they’re not mentioned in the passage, we can discard the answer choice “An invasive species can cause beneficial effects to its new environment as well as harmful ones.” This is definitely not what the indicated sentence is saying; if we replaced the sentence with this answer choice, the logic of the paragraph wouldn’t make any sense. As for the remaining answer choice, “One should never move a species from its natural environment into a new environment for fear of the consequences,” it cannot be correct because in the sentence before the one on which this question focuses, the author writes, “The man who successfully transplants or ‘introduces' into a new habitat any persistent species of living thing assumes a very grave responsibility.” Note that he doesn’t say that this should never be done; he just implies that it could go very badly. It wouldn’t make much sense if in the next sentence, the author said this should never be done. It seems more logical that he would have led with that statement, it being the stronger of the two. Example Question #11 : Inference About The Author Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about $7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. Based on the first paragraph, the author would be most likely to support __________. Possible Answers: keeping bison out of Yellowstone National Park introducing damaging invasive species to the South an effort to catalogue the exact amount of money invasive species have cost the United States granting Howell clemency for his actions a law severely punishing those who introduce invasive species that damage the environment Correct answer: a law severely punishing those who introduce invasive species that damage the environment Explanation: One of the author’s main points in the first paragraph is that harsher legal repercussions are needed for those who release damaging invasive species into the United States. This is clear when the author writes, “The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable.” Thus, we can infer that the author would be most likely to support “a law severely punishing those who introduce invasive species that damage the environment.” Though the author does discuss the potential for someone to introduce invasive species to the South, he is not in favor of this, and he clearly doesn’t want to grant Howell clemency for his actions. (Furthermore, “clemency” somewhat implies that Howell has been charged with a crime, and the author explains that this isn’t the case.) The author does state, “The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality,” and we can therefore assume that he might support cataloguing the amount of money invasive species have cost the United States. However, this inference requires a much larger logical leap than does the one that the author would support harsher legal punishments for those who introduce damaging invasive species, making “a law severely punishing those who introduce invasive species that damage the environment” the best answer. If you’re unsure when picking between answers to an inference question, it’s usually a good idea to see which one is more relevant to the passage’s topic and has the most evidence supporting it. Example Question #2 : Inferences And Predictions In Argumentative Science Passages dapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. If the author were to learn that the gypsy moth could be efficiently repelled from trees by coating them with a cheap, natural substance, he would likely feel __________. pessimistic unsurprised exuberant anxious horrified exuberant Explanation: Throughout the passage, the author makes it apparent that he feels that the gypsy moth is a very damaging invasive species that causes a lot of problems in the United States. He calls it a “winged calamity” and, in the third paragraph, describes how it spread: “The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about $7,680,000!” From this paragraph, we can tell that if the author were to learn that the gypsy moth could be efficiently stopped from damaging trees, he would be most likely to feel “exuberant,” or excited and happy. Nothing in the passage supports any of the other answers. Example Question #1 : Textual Relationships In Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. Howell’s story is different from that of Mr. Trouvelot’s in that __________. Howell could be punished by law, while Trouvelot could not Howell acted alone while Trouvelot worked with a group Howell worked for a zoo while Trouvelot was a scientist Howell acted purposely while Trouvelot introduced the moths by accident Howell sought to capture insects while Trouvelot sought to release them Howell acted purposely while Trouvelot introduced the moths by accident Explanation: According to the passage, what did Howell do? He was caught skinning bison in Yellowstone National Park and there was no way to punish him, a point about which the author is frustrated. What did Mr. Trouvelot do? He accidentally released gypsy moths into the United States, where they’ve caused a lot of trouble since. Nothing in the passage says that Mr. Trouvelot worked in a group, so we can eliminate the answer “Howell acted alone while Mr. Trouvelot worked with a group.” Similarly, while the passage says that Mr. Trouvelot was a scientist (an entomologist), nothing says that Howell worked for a zoo, so “Howell worked for a zoo while Trouvelot was a scientist” can’t be correct. The author brings up Howell’s story as an example of someone who couldn’t be punished by law for what the author considers an egregiously bad act, so “Howell could be punished by law, while Mr. Trouvelot could not” can’t be correct either. Howell’s story has nothing to do with insects and Mr. Trouvelot released his gypsy moths on accident, so “Howell sought to capture insects while Trouvelot sought to release them” cannot be the correct answer. This leaves us with one answer choice, the correct one: “Howell acted purposely while Trouvelot introduced the moths by accident.” Example Question #31 : Drawing Evidence From Natural Science Passages Adapted from “Introduced Species That Have Become Pests” in Our Vanishing Wild Life, Its Extermination and Protection by William Temple Hornaday (1913) The man who successfully transplants or "introduces" into a new habitat any persistent species of living thing assumes a very grave responsibility. Every introduced species is doubtful gravel until panned out. The enormous losses that have been inflicted upon the world through the perpetuation of follies with wild vertebrates and insects would, if added together, be enough to purchase a principality. The most aggravating feature of these follies in transplantation is that never yet have they been made severely punishable. We are just as careless and easygoing on this point as we were about the government of the Yellowstone Park in the days when Howell and other poachers destroyed our first national bison herd, and when caught red-handed—as Howell was, skinning seven Park bison cows—could not be punished for it, because there was no penalty prescribed by any law. Today, there is a way in which any revengeful person could inflict enormous damage on the entire South, at no cost to himself, involve those states in enormous losses and the expenditure of vast sums of money, yet go absolutely unpunished! The gypsy moth is a case in point. This winged calamity was imported at Maiden, Massachusetts, near Boston, by a French entomologist, Mr. Leopold Trouvelot, in 1868 or 69. History records the fact that the man of science did not purposely set free the pest. He was endeavoring with live specimens to find a moth that would produce a cocoon of commercial value to America, and a sudden gust of wind blew out of his study, through an open window, his living and breeding specimens of the gypsy moth. The moth itself is not bad to look at, but its larvae is a great, overgrown brute with an appetite like a hog. Immediately Mr. Trouvelot sought to recover his specimens, and when he failed to find them all, like a man of real honor, he notified the State authorities of the accident. Every effort was made to recover all the specimens, but enough escaped to produce progeny that soon became a scourge to the trees of Massachusetts. The method of the big, nasty-looking mottled-brown caterpillar was very simple. It devoured the entire foliage of every tree that grew in its sphere of influence. The gypsy moth spread with alarming rapidity and persistence. In course of time, the state authorities of Massachusetts were forced to begin a relentless war upon it, by poisonous sprays and by fire. It was awful! Up to this date (1912) the New England states and the United States Government service have expended in fighting this pest about \$7,680,000! The spread of this pest has been retarded, but the gypsy moth never will be wholly stamped out. Today it exists in Rhode Island, Connecticut, and New Hampshire, and it is due to reach New York at an early date. It is steadily spreading in three directions from Boston, its original point of departure, and when it strikes the State of New York, we, too, will begin to pay dearly for the Trouvelot experiment. Why did Mr. Trouvelot bring gypsy moths to Boston? He wanted to use them combat other insect pests that were ruining his crops. He wanted to feed them to the birds he kept in his aviary. Mr. Trouvelot did not bring gypsy moths to Boston; he brought them to Yellowstone National Park. He wanted to release them as a scientific experiment. He was trying to find a moth that would make cocoons he could sell.
{}
# Dissecting ARGFest’s FestQuest This is one of many posts at Netninja dis­sect­ing puz­zle design. You may want to explore the “puz­zlegames” tag for more. Also note that major spoil­ers lie ahead. As read­ers of this blog may know, I help run a monthly puzzle-solving event called Puzzled Pint. (And if you look at the web­site right now, you’ll see the loca­tion puz­zled posted for Tuesday’s events in Portland, Seattle, London, Chicago, Phoenix, and the Bay Area.) We dis­trib­ute every­thing that Puzzled Pint cre­ates under a Creative Commons license. You can find all those puz­zles in the archives, avail­able for non-commercial use. This year, ARGFest asked us to build FestQuest, a light­weight puz­zle hunt that dou­bles as a walk­ing tour of the city. Instead of just dump­ing all the puz­zles on a web­site, I thought it would be more fun to walk through some of the behind-the-scenes revi­sions and work. ## Bag Handouts Each con­ven­tion bag received a lit­tle Puzzled Pint sam­pler hand­out. The sim­plest one was new. We culled two of the puz­zles from the archives. We designed the fourth, a stan­dard no-frills cryp­togram, fresh. The front side has some fun bee-related puz­zles. The reverse alludes to time travel and, if you decode the cryp­togram beyond the quote, talks about being recruited to save human­ity. Although a few peo­ple saw that the star-ratings weren’t stars for the four-star puz­zle, I’m not sure that folks noticed until after FestQuest began that there were four vari­ants of the same cryp­togram. The PDF linked from the above thumb­nails con­tains all four vari­ants. The Puzzled Pint playtest group solved the cryp­togram by hand. This is a rea­son­able attack, given the coded text is so long and falls fairly well into stan­dard let­ter dis­tri­b­u­tions. I had expected ARGFest folks to just plug it directly into a cryp­togram solver (one of my favorites is quip qiup, which spits out the cor­rect answer in less than a sec­ond). I was a lit­tle sur­prised to see one team, hav­ing noted that the quote’s author is in plain­text, look up famous Jonathan Swift quotes to dis­cover which one fit. I love see­ing peo­ple attack puz­zles in unex­pected ways. That’s part of my hacker upbring­ing, I guess. ## Kickoff Puzzle The FestQuest kick­off puz­zle was an uncon­ven­tional cryp­togram — a polyal­pha­betic vari­ant that you can’t just plug in to a solver. In addi­tion to recruit­ing the team on a mis­sion to save the world, its job was to be an ice-breaker, allow for dis­trib­uted solv­ing, and used one of my favorite puz­zle tropes: “you had the key to the answer with you the whole time!” Once you hit the a-ha moment of notic­ing the four sym­bols in the cryp­togram match the four vari­ants of the hand­outs, it’s just a mat­ter of solv­ing the hand­out cryp­togram once and copy­ing the let­ters over to the kick­off puz­zle. REVISIONS This puz­zle went through some minor revi­sion after playtest­ing. The coded alpha­bet is assigned through a vari­ant of key­word cipher. To gen­er­ate the code key, you pick a key­word, strip out the dupli­cate let­ters, then fill in the remain­der of the alpha­bet after. For exam­ple, with DIAMOND: Plaintext: ABCDEFGHIJKLMNOPQRSTUVWXYZ Ciphertext: DIAMONBCEFGHJKLPQRSTUVWXYZ This is a hor­ri­bly unsat­is­fy­ing solve because the let­ters from P onward — includ­ing the pop­u­lar let­ters R, S, and T — match between the plain­text and cipher­text. Reversing the alpha­bet from Z to A fixed that. I over­heard one team notice that the key­word for cir­cles used some vari­ant of “disc” with extra let­ters. This is because both cir­cle and disc popped out match­ing let­ters in the mid­dle of the alpha­bet. Given more time, we may have used a totally dif­fer­ent shape (for instance, replac­ing cir­cle with hex or pen­ta­gon) for a more sat­is­fy­ing holis­tic solve. Due to hard dead­lines we went with “good enough.” ## Who-Doku The first stop along the tour was Waterfront Park, specif­i­cally the Battleship Oregon Memorial. Teams picked up an unusual sudoku. It is a stan­dard sudoku except crossword-style clues replace the hard-coded starter num­bers. Certain spaces have cir­cles, indi­cat­ing data to “pull out” from the grid. Indexing those into the alpha­bet (one of the sim­plest codes in the handy-dandy code sheet) spells out the answer. REVISIONS The orig­i­nal proof-of-concept puz­zle had fewer clues and a rather unsat­is­fy­ing answer. (As I recall, it was some­thing like “bad ice dog.”) The next revi­sion fea­tured a bet­ter answer, but the live playtest the week­end before ARGFest revealed a flaw in the puz­zle. There were not enough clues given to arrive at a sin­gle unique answer. This dry run also had play­ers going out to Saturday Market, which is a won­der­ful touristy spot. Our playtesters quickly dis­cov­ered it was a lit­tle too busy to find a spot to sit down and fin­ish a puz­zle. It was also fur­ther out of the way than the other spots, adding to the length of the walk. ## Defenders of the Park The next step of the tour was Mill Ends Park, the small­est park in the world. This fea­tured topo maps and army men. Find the right place­ment of the troops, who hold a mes­sage for you. REVISIONS The pro­to­type of this puz­zle used actual green plas­tic army men. Many boys know from their child­hood that a mag­ni­fy­ing glass, matches, or lighter will let you bend and warp the small plas­tic fig­ures. It turns out that bak­ing them in the oven also makes them mal­leable enough to bend their arms into sem­a­phore shapes. The under­side of the bases held the printed matching-pictures. This led to a pretty fun a-ha moment. I was out of town and off-the-grid when the deci­sion came down to use printed tokens instead of army men. I can only assume it was some com­bi­na­tion of the sem­a­phore being a lit­tle ambigu­ous (we had prob­lems with elbows and arm-direction vs. forearm-direction in the pro­to­type playtest) and the pro­duc­tion work involved in cre­at­ing mul­ti­ple sets of army men. ## Not in Portland / QRossword This puz­zle is a cross­word, but with num­bers. You pull some of the num­bers from a sign­post sculp­ture at Pioneer Courthouse Square. Others are trivia. I’m just glad that, despite the Major League Soccer event held that week­end in The Square, the sign­post was still vis­i­ble. I heard, after the fact, that one team devel­oped an amaz­ing sys­tem for solv­ing this one. The divided the tasks: pulling num­bers from the sign, look­ing up trivia on Google, per­form­ing the math, con­vert­ing to binary, and fill­ing in the puz­zle. They even took it a step fur­ther. After enter­ing each answer in the grid, they’d attempt to QR scan the puz­zle. Because there’s a cer­tain amount of redun­dancy in QR code data (to com­pen­sate for dam­aged codes, poor light­ing con­di­tions, and bad cam­eras), they fig­ured they didn’t nec­es­sar­ily need to have all the answers entered before the QR scanned. (They were right.) REVISIONS First off, I have to say that this puz­zle is a jab at Steve Peters’ hate of QR codes. There were very changes to the puz­zle itself between pro­to­type and final. The main one was that one of the answers lit­er­ally solved to zero, with no shad­ing in the grid. The playtesters didn’t think that was very sat­is­fy­ing. We shifted the blank up by one; a zero is still a major part of the math lead­ing to the answer (lots of near-impossible trivia mul­ti­plied by zero), but you do end up shad­ing in a box for that ques­tion. The next stop was a quick jump over to Director Park for one of the tasti­est puz­zles I have ever expe­ri­enced. Accompanied with this were a set of deli­cious cin­na­mon alpha­bet cook­ies: • A x 6 • C x 3 • E x 8 • F x 1 • G x 1 • I x 5 • L x 4 • M x 1 • N x 6 • O x 1 • P x 1 • Q x 1 • R x 3 • T x 5 • U x 2 • V x 1 REVISIONS The pro­to­type of this puz­zle had a sin­gle list of words from which you had to deduce pairs. Half-way through a rough solve we all con­cluded that it would be eas­ier to split the words into two columns, pick­ing one from each. A few of the words changed after playtest­ing for clar­ity. ## Book Hive The Powell’s puz­zle fea­tures col­or­ful hexes with book depart­ments. When you arrange the hexes so that the depart­ment labels are adja­cent to their room col­ors, you can spell out the two answer words around the cen­tral hex. REVISIONS In test solv­ing the pro­to­type puz­zle, we made a few mis­takes. We were not actu­ally at Powell’s that evening. It was one of our Tuesday location-scouts for Puzzled Pint. It turns out that Powell’s has shuf­fled depart­ments between the col­ored rooms over the years. The cur­rent state of things doesn’t match the October 2003 PDF map that comes up first in a Google search. In fact, due to con­struc­tion, the cur­rent state is a lit­tle dif­fer­ent from the state seen ear­lier in 2014. We had to be cer­tain that: (1) the puz­zle matches the real world and (2) every­one had a copy of the “2014 Construction Map” in their hand­out so that every­one was work­ing from the same (cor­rect) set of data. ## Honeycomb Drive Each loca­tion gave the teams a piece of “the Honeycomb Drive.” Once a team col­lects all of the pieces, a quick puz­zle shows how to assem­ble it. Information from the pre­vi­ous five puz­zles more-or-less feed into this one. This gives you the final answer word to dis­able the evil AI. REVISIONS If any­one wants to have their very own Honeycomb Drive, I have the source files avail­able in a Ponoko-compatible for­mat. Until pretty late in plan­ning, this was called sim­ply “The Device” or “The Communications Device.” Ana picked out Honeycomb Drive as a great “mar­ket­ing name” only a few weeks before ARGFest. This puz­zle went through three phys­i­cal revi­sions and two revi­sions of the cor­re­spond­ing paper hand­out. I started the phys­i­cal design months pre­vi­ous to the event. Originally, each piece was giant. It took up a sat­is­fy­ing amount of space in a clasp enve­lope. Using foam­core board as a pro­to­typ­ing mech­a­nism, I felt the assem­bled device was a lit­tle too big. Additionally, I wasn’t sure if I was going to make it through the local tech shop or Ponoko, so I shrunk it and opti­mized the design to fit on Ponoko’s “P1” acrylic sheet. I even­tu­ally did go with Ponoko. The local tech shop is easy and great for wood­work­ing and met­al­work and have an ever-expanding elec­tron­ics work­bench. Their laser setup still requires a lot of babysit­ting. They don’t have a mate­ri­als library and don’t let you use the cut­ter directly. I end up hav­ing to go to Tap Plastics myself to grab the acrylic, then set­ting up an appoint­ment to meet some­one at the laser cut­ter. A 20-minute job takes sev­eral hours out of the day (and takes a few days lead time to set up). Since I was only going to be out of town for one day in the two weeks lead­ing up to ARGFest, I went with Ponoko. This also accounts for some of the color choices (and cor­re­spond­ing team names). Their translu­cent color selec­tion is a lit­tle odd. The first man­u­fac­tured revi­sion of this puz­zle had etched words that, in my opin­ion, were a lit­tle too small. The playtesters had no prob­lem read­ing them, but I wasn’t happy with the clar­ity. This was on red acrylic and was a backup “Red Team” in case we had enough peo­ple play­ing to war­rant five teams. It also had pieces that fit a lit­tle loosely. I designed the pieces for 3mm acrylic. I think the slots ended up at 3.1mm to accom­mo­date a lit­tle bit of tol­er­ance dif­fer­ence. Unfortunately, this batch of “3mm” acrylic from Ponoko actu­ally mea­sured out at 2.7mm, giv­ing a whole lot of slop space. Most of the laser cut projects I work on are enclo­sures; I err on the side of extra slop because the pan­els are held in place with ten­sion bolts. (See The Chubby Tricorder Project for an exam­ple.) Given more time, or were I to do this again, I would have put lit­tle tension/friction bumps in the slots to bet­ter hold things in place (and pos­si­bly cor­re­spond­ing etch-points to bet­ter cap­ture them in place). The sec­ond, and final, man­u­fac­tured ver­sion was the one every­one played with on game day. I didn’t feel I had enough time, given my lim­ited avail­abil­ity, to tweak much more than font size. As far as paper-handout revi­sions go, the first one was more activ­ity than puz­zle. In fact, a few of the answers weren’t pulled directly from the puz­zle answers (“green” for the army men, “cin­na­mon” for the cook­ies) because, by the time I needed final-ish designs of the rotors, those puz­zles were still in a pro­to­type form. DeeAnn did a great job at rewrit­ing them into a more puzzle-y form. ## ClueKeeper We used ClueKeeper to han­dle all of the back-end logis­tics: val­i­dat­ing answers, dis­trib­ut­ing hints, and lead­ing teams to their next loca­tion. Although I didn’t do any work design­ing the ClueKeeper end of the puz­zle hunt, I did learn how amaz­ingly flex­i­ble it can be. We’re start­ing to work on the design of a replayable Portland hunt, reusing a cou­ple of the FestQuest loca­tions. If you’re in Portland, stay tuned. ## Conclusion I hope this gives a lit­tle more of a behind-the-scenes view of how Puzzled Pint han­dles puz­zle design. At a min­i­mum, puz­zles go through a pro­to­type, at least one playtest, and final QC before the pub­lic sees them. With events in live spaces, there’s a playtest or dry run. This tests not only the puz­zles, but the envi­ron­ment. This is also why we scout bars on Tuesday nights — to bet­ter see what the crowd might look like, to see if we run into event clashes like music or trivia nights, and so on. In fact, this iden­ti­fied the crowd­ing prob­lem with using Saturday Market as a loca­tion. If this hasn’t scared you away from writ­ing puz­zles or run­ning hunts, Puzzled Pint is always look­ing for guest authors and ClueKeeper is happy to talk to you about their author­ing tools and infra­struc­ture. And if you think you want a longer puz­zle hunt, I encour­age you to get involved with DASH as a player, a playtester, or vol­un­teer. Posted in: # Storytelling through aspect ratios I work in the tech­ni­cal video field, so when watch­ing dig­i­tal video, I often notice things. Macroblocking, smear­ing, edge ring­ing, blurred edges. They’re the type of things that most “video mug­gles” don’t notice, so I tend to let them slide. This week­end I watched Grand Budapest Hotel and chuck­led a lit­tle at the open­ing slate: As if. I thought it was a lit­tle joke. Pretty much every­thing capa­ble of dis­play­ing video these days does 16x9. I thought it to be some­thing like “in stereo, where avail­able.” I watched the whole thing, front to back, in one sit­ting, get­ting totally sucked in to the story. I don’t think it was Wes Anderson’s best, but it was still a great movie. It wasn’t until later, when I went back to review a scene that I noticed. The aspect ratio was... odd... And that I didn’t notice the first time through just shows how strongly the story pulled me in. It took a bit of fast-forwarding and rewind­ing, but I even­tu­ally con­cluded that the film uses FOUR dif­fer­ent aspect ratios — one for each time period depicted. Okay, tech­ni­cally, three (but two dif­fer­ent sizes of one ratio), and then the open­ing slate itself is a fourth or a fifth, depend­ing on what you’re count­ing. Widescreen tele­vi­sions these days are 16:9. For every 16 pix­els across, you get 9 pix­els down. That’s a ratio of 16:19, or 16 ÷ 9 which is 1.78:1. (Laptops are a funny thing and have all sorts of dis­play ratios. Mine hap­pens to be 16:10 or 1.6:1.) The open­ing slate itself is a lit­tle weird in that it’s 16:8, or 1.871:1. The movie is book­ended by pre­sum­ably modern-day shots of a girl vis­it­ing a memo­r­ial to “The Author.” These shots are the same 16:8, but they’re let­ter­boxed down to a smaller size. They’re 1.85:1, or Academy widescreen. This is one of a cou­ple of dif­fer­ent non-proprietary widescreen for­mats that cam­era mak­ers, stu­dios, and the­aters agreed upon in the 50s. The image size is the same as the “please set your mon­i­tor...” slate above, but I’ve high­lighted the let­ter­box­ing in green. We get a few quick scenes of The Author speak­ing about his past, address­ing the audi­ence, nar­rat­ing the story. These scenes end up being the same ratio, but on a smaller area of the print. He talks of vis­it­ing the hotel in the 60s. The hotel has seen bet­ter days, but the reluc­tant owner has a few sto­ries to tell. This is a full edge-to-edge aspect ratio, 2.31:1. It’s a dif­fer­ent acad­emy widescreen stan­dard that’s a bit more wide than the other. When we delve into the hotel owner con­vey­ing sto­ries of his days as a Lobby Boy in the grand hey­day of the hotel, we switch to a very odd 4:3 aspect ratio. You’d rec­og­nize it from older standard-definition TV. You might even rec­og­nize it from the orig­i­nal Edison movie equip­ment. Each era, each with a dif­fer­ent res­o­lu­tion or aspect ratio, all a sub­tle part of sto­ry­telling. And yet, oddly, the sto­ry­telling was strong enough that I missed it the first time through. Posted in: # 17 years of Netninja Today Netninja.com turns 17. It’s been a long jour­ney through sev­eral dis­tinct phases. The early his­tory is a lit­tle fuzzy in my mind. This lit­tle nar­ra­tive is my best-effort attempt at recall­ing the details, to bet­ter doc­u­ment them for the future. My first glimpse into web­pages was at DefCon 2, the hacker con­ven­tion in Las Vegas in 1994. That was after com­mu­nity col­lege, just as I was get­ting set­tled in to a four-year uni­ver­sity. I’d been using the inter­net (through school) for some time. It was all mail­ing lists, usenet groups, FTP, and gopher sites. I’d either not heard of web pages, or I had but they hadn’t reg­is­tered solidly on my radar. At DefCon, there was some talk about using the inter­net to order pizza. The archi­tec­ture was some­thing dumb and hacked together, like a web­page hooked to a fax­mo­dem. But I didn’t quite get it. The colon-slash-slash and all that. I hadn’t seen URIs before. They were a com­bi­na­tion of for­eign and famil­iar. I could see how they might be use­ful short­hand for ftp://example.com@username:password/folder/file.txt, but this “http” thing was new. I learned quickly. It wasn’t too log after that that I ran my own lit­tle web page from the tilde-home direc­tory of the school’s com­puter. It was about as cheesy and bad as you’d expect. Fast-forward three years. Between hacker talk and lit­tle plas­tic ninja toys dis­pensed from vend­ing machines, “net ninja” had become a reg­u­lar part of my vocab­u­lary. In the inter­ven­ing time, I had also learned a lot about writ­ing and host­ing web pages, includ­ing the fact that a nor­mal Joe — not directly asso­ci­ated with an edu­ca­tional facil­ity or cor­po­ra­tion — could pur­chase and use a domain name. This would have been the sum­mer lead­ing up to DefCon 4. June 1997, specif­i­cally. I grabbed the domain name and put up a web­site. I don’t have any imme­di­ately avail­able archives of that site. You can rest assured that it had a black back­ground and hor­i­zon­tal rules ani­mated with drip­ping blood and torches. Since the begin­ning, I had a robots.txt file block­ing it from being indexed by search engines (and archive.org’s way­back machine, for that mat­ter). My line of think­ing was that you had to know about the site through some­one or some other site. You couldn’t just dis­cover it on Yahoo or AltaVista. Unfortunately, that also blocked it out from being archived. I think I may still have a cvs repos­i­tory some­where around here with the early his­tory, but it’s likely on a CD that is slowly dete­ri­o­rat­ing. Although I care enough about Netninja’s anniver­sary to write this post, I don’t care enough to find and dig through an old source con­trol sys­tem. The ear­li­est cap­ture on archive.org was from 1998, a year and a half after start. This was the sec­ond or third major revi­sion of the site. 1998: black was the new black and webrings ruled the inter­net. The site was basi­cally a col­lec­tion of sta­tic pages. I think there might have been a lit­tle bit of PHP glue to main­tain a nav­i­ga­tion, but it was mostly hard-coded web con­tent. The major­ity (or pos­si­bly entirety?) of the site was devoted to hack­ing and friends. You’ll note that “Brian Enigma” and “The Ninja” are two sep­a­rate entries. It’s been clear in my mind from the start that they are two sep­a­rate enti­ties.  I am not “a” or “the” Netninja. Netninja is either a gum­ball machine toy or a mys­te­ri­ous entity, depend­ing on con­text. And yet, the major­ity of email I received usu­ally equated the two. By 2000, I’d upgraded from Notepad.exe to vi. And hand-tested every­thing in Netscape and Lynx. Screw Internet Explorer. <hipster>You’ll note that I was steam­punk before steam­punk was cool.</hipster> I even 3D mod­eled and ani­mated the glow­ing but­tons myself. And yet, I hadn’t yet learned that webrings were lame. By 2002, I’d writ­ten an honest-to-goodness blog­ging engine. We have the same bad steam­punk stylings, but all the pages switched from html to php, with a com­mon nav­i­ga­tion. We go through a cou­ple iter­a­tions of the same, 2004 and 2005, improv­ing nav­i­ga­tion and adding a side­bar. The biggest change (in infra­struc­ture and con­tent) was about 2007. I moved from my cus­tom site code to run­ning an instance of WordPress. This meant I could import a bunch of LiveJournal con­tent (of ques­tion­able qual­ity) going back to 2001. Typical LJ con­tent was what I did that day, navel-gazing intro­spec­tion, and angsty posts about music, movies, and tele­vi­sion. Existing hacker-related con­tent from the “old” site had to be man­u­ally migrated. The impor­tant stuff did, but I left a lot out. The new site for­mat also meant it was much eas­ier to post new con­tent. Adding a page to my cus­tom site frame­work usu­ally involved writ­ing HTML files that the tem­plate would pick up, pos­si­bly mak­ing changes to the tem­plate itself, and sync­ing the whole thing from my desk­top to the web­site. With WordPress, of course, you just open the edi­tor page and start typ­ing. Easy. By 2009, my pri­mary blog­ging engine was WordPress run­ning on Netninja. Everything got auto­mat­i­cally cross-posted to LiveJournal and a lot of the com­ment dis­cus­sion lived there, but the con­tent was mine. It lived here. I also did some tem­plate cus­tomiza­tion (though the css is a bit glitchy in the archive.org cap­ture) to make my site feel a bit more like LiveJournal, with cus­tom per-post user icons (such as the bar­code flag here). We next go through a few more wood-grain iter­a­tions while the con­tent sub­tly shifts from per­sonal diary to things other peo­ple might actu­ally care about. That lat­ter ver­sions add a bit of parch­ment paper. By 2011, Netninja.com is much more focused on tech and Portland and less about my daily life. We’re also in the graphic design phase I lov­ingly call “I can’t read this thin serif text on that Apple-inspired linen back­ground.” We then, of course, have today’s Netninja. And that ret­ro­spec­tive con­cludes our digres­sion back to navel-gazing. Posted in: # On accurately measuring 2oz I picked up some lab­o­ra­tory glass­ware, specif­i­cally for kitchen use, the other week. It’s all borosil­i­cate, so it’s effec­tively the same as Pyrex. Some of it will get used for drink­ing glasses but one item I picked up was a grad­u­ated cylin­der. My plan is to use it for mixed drinks. With a cock­tail jig­ger, it is easy to mea­sure out 1 or 2 ounces. But with a strong ingre­di­ent like cel­ery juice, 1oz may already be too much. Accurately exper­i­ment­ing with small vari­ances in vol­ume felt like the right way to go. I did some math and found that 2oz ≈ 59mL. A 100mL cylin­der should work well. I ordered, it showed up, and I started to doubt my math. Or my abil­ity to order the cor­rect cylin­der. That doesn’t look quite right. I double-checked the mark­ings. Yep, it’s the right size cylin­der. In this pic­ture, the cylin­der is filled with col­ored liq­uid for bet­ter con­trast and there is a rub­ber band at the 59mL mark. Next to the cylin­der is my mea­sur­ing jig­ger: 2oz up top, 1oz down below. Even tak­ing into account that the jig­ger is a misleading-to-estimate cone whose open end is a lit­tle wider than the cylin­der, it feels like 59mL is way too much. Like maybe 30 would be more accu­rate. I double-checked my math, asked Google and Wolfram Alpha, and it all worked out. 2oz = 59.15mL. Of course, I then went the other direc­tion: I filled the jig­ger and dumped it into the cylin­der. That’s when things became a bit more clear. 2oz as 60 mil­li­liters feels more accu­rate, but now I’m left to dis­cover that the unmarked jig­ger that I pre­vi­ously assumed was 2oz was really 50mL or 1.7oz. I’ve been mak­ing weak drinks this whole time! (Well, admit­tedly, I often pur­posely make weaker-than-bar drinks because I often find myself look­ing for liq­uid refresh­ment more than I’m look­ing to get toasted.) And, oddly enough, the 1oz jig­ger clocks in at roughly 31.5mL, or 1.1oz. Posted in: # Designing a fair 3-sided coin A cou­ple of week­ends ago I played Betrayal at House on the Hill with a few friends. One of the stranger fea­tures of the game are the dice. They’re 6-sided die that act as 3-sided. They have pips for 0, 1, and 2 — two faces with each value. During the game, Jonathan brought up the fact that Von Neumann worked out a fair 3-sided “coin” of a dif­fer­ent geom­e­try that could work the same for this game. Instead of the coin being a cylin­der so short you don’t think of it as a cylin­der any­more, it’s extruded to have a sig­nif­i­cant height. There’s a num­ber on each end (the heads and tails of a tra­di­tional coin) plus a third option on the now-not-insignificant edge. In my research, I have also run across a design that looks a lit­tle more like a Toblerone: a tri­an­gu­lar prism that’s either so long you couldn’t land it on either end (which seems too long for a sat­is­fy­ing roll) or one that is a lot shorter, but whose ends taper to a point. I am going to skip this vari­ant dur­ing this dis­cus­sion. It’s triv­ial to see how to make this one fair: start with an equi­lat­eral tri­an­gle and extrude it to a prism. I have inten­tion­ally only skimmed the papers that talk in detail about the design of this 3-sided coin and how to arrive at the cor­rect ratio of diam­e­ter to height. I thought I might work it out for myself before run­ning off to view the spoil­ers. ### The tall extreme (height) If feels like the tallest you could pos­si­bly go would be to match the diam­e­ter of the cir­cle to its height. You effec­tively take a 6-sided die and lathe it down to a cylin­der. $h = 2r$ But this intu­itively feels wrong. On the uni­form cube of a 6-sided die, you have one face at each end, but around the out­side (the edge of the cylin­der in this case) are four faces. It feels like you’d have a much higher prob­a­bil­ity of hit­ting the edge. Looking at a 3D model, it even looks dif­fi­cult  to hit either end: ### The short extreme (sur­face area) On a stan­dard 6-sided die each face has the same sur­face area, so maybe that’s the best way to go? If we try to match the sur­face area of one end to the sur­face area of the edge, we get to use a bit more math: $\text{end surface area} = \text{edge surface area} \\ \pi r^2 = 2\pi r h \\ \frac{\pi r^2}{r} = \frac{2 \pi r h}{r} \\ \pi r = 2 \pi h \\ \frac{\pi r}{\pi} = \frac{2 \pi h}{\pi} \\ r = 2 h \\ h = \frac{r}{2}$ That makes the height half the radius. That’s a quar­ter of the diam­e­ter. Intuitively, that feels pretty darn short. In fact, it feels like the short­est pos­si­ble extreme. In a 3D ren­der, the edges look impos­si­ble to hit, like more of a 2-sided coin than a 3-sided one: ### Conclusion (so far) There is some middle-ground between the two extremes that I’m miss­ing. So far, I have been unable to come up with a good hypoth­e­sis that fits between the extremes. If I don’t come up with a good hypoth­e­sis in a week or so, I’ll “cheat” and dig into the papers for the cor­rect solu­tion (and hope it doesn’t involve any of the cal­cu­lus I’ve for­got­ten since col­lege). Posted in: # And introducing: Cornelius I real­ized that I’ve men­tioned this on Instagram and to a few friends, but have not yet made any ref­er­ence on this blog: I have a new kit­ten! His name is Cornelis! He was born on February 21st at Enchanted Sphynx. I vis­ited him and his brother about a month ago. And he came home two week­ends ago. It was an inter­est­ing road trip: We’re still get­ting fully set­tled in. He absolutely adores Norman, the indoor/outdoor tuxedo that showed up semi-feral on our doorstep. He shad­ows Norman around and tries to play with his tail. For the most part, Norman is indif­fer­ent (except for the tail-playing, which he dis­likes). The Precious is a dif­fer­ent story. She hates change and doesn’t like him. She’s warmed up to him slightly over the past cou­ple of weeks, but is still pretty frigid. It took her a few months to get used to Ebenezer, but they even­tu­ally became best bud­dies. Pictures! Posted in: # Building light-up R2D2 mouse ears (May the Fourth) I basi­cally grew up at Disneyland. As a kid, my fam­ily lived about 20 min­utes away and we’d get to go twice a year: once on my birth­day and once on my sister’s.  In my 20s, I had an annual pass and would go on the week­ends, or just pop by for an hour after work to have din­ner, peo­ple­watch, and maybe catch a ride or two. My last 10 years have been in the pacific north­west, with only two or three Disneyland trips. But I had a ran­dom thought last year. After play­ing with the Adafruit Neopixel strips on other projects, I thought I might attach them to some Disneyland mouse ears. I might even have the guts to wear those ears to the park next time I vis­ited. I started with some mouse ears. I wasn’t totally cer­tain that offi­cial Disneyland mouse ears would be avail­able out­side of the park, but Google brought me to The Mouse Shoppe. Furthermore, I had no idea that there was such ear-hat vari­ety. Given that I was going to drop some elec­tron­ics into and onto the hat, I opted for R2D2. The elec­tron­ics pack­age I came up with is nearly iden­ti­cal to Adafruit’s “Cyberpunk Spikes,” only wrapped around the ears rather than under some sil­i­cone spikes. My bill of mate­ri­als looks a bit like this: The elec­tron­ics part of the build was fairly easy — the same as the Cuberpunk Spikes project, except with a gap between two LED strips. Getting every­thing attached to the hat in a way that looks decent was, for me, the chal­lenge. • Wire the LEDs through the hat with enough slack wire to make them work­able. • Using tem­po­rary sol­der joints, ver­ify the LED’s wiring against the (unmounted) micro­con­troller and bat­tery. Unsolder the tem­po­rary joints. • Use small zip­ties to take in the slack between the LED strips. • Create a slip-out bat­tery har­ness using elas­tic. • Trim and sol­der the LED wires to the micro­con­troller. • Sew in the micro­con­troller. • Glue down the LEDs. • Sew down any other assorted loose wires. The ear hat is made of a strange sort of rub­ber­ized fab­ric. Using a hobby knife, I made inci­sions, enough for three wires (power, ground, data) to fit through. A strip of 16 LEDs fit per­fectly around the cir­cum­fer­ence of an ear so I sol­dered leads to either end. I didn’t want them too short, as that would mean a lot of rework, and fig­ured I could eas­ily trim them later, so they were longer than nec­es­sary. Label the wires! This is impor­tant. I used black wire and wrapped the ends of the LED strips in elec­tri­cal tape to bet­ter con­ceal and pro­tect the sol­der joints. Without labels, it is easy to mix up your lines. Pass the nice long wires through an inci­sion up near the next ear and care­fully wire them to the next strip of 16 LEDs. Once the two sets of LEDs were wired through the inci­sions, sol­dered in series, passed my ini­tial test­ing (but weren’t yet glued down), I pulled in some of the slack between the two strips and bun­dled it with a zip­tie. I then held the bat­tery where the elas­tic har­ness would go: at the inside top cen­ter of the hat. The bat­tery is the heav­i­est piece, so I didn’t want it throw­ing off the hat’s cen­ter of grav­ity. I used per­ma­nent marker to mark its spot, then sewed in elas­tic strips. Mark the battery’s loca­tion. Mark the battery’s loca­tion. Elastic sewn in. Blue thread and strate­gic posi­tion­ing helps the sewing blend in. I then sewed down (gray thread this time) the leads between the first LED strip and the micro­con­troller, as well as the micro­con­troller itself. There was a spot on the out­side graph­ics that looked per­fect for the board. The next step was to glue down the LED strips. I used clear sil­i­cone sealant then tem­porar­ily held every­thing in place with binder clips. Finally, I sewed down the excess slack wire This design has no power switch. You just snake the power exten­sion wire from the bat­tery to the micro­con­troller and connect/disconnect as needed. Programming is easy. The USB port is exposed. Plug in and upload. I started with the sam­ple code for the Cyberpunk Spikes project, but tweaked it sig­nif­i­cantly to get those inward-spinning rain­bow wheels. Posted in: # A close shave I started shav­ing, way back in the day, with an elec­tric razor, then switched to dis­pos­ables in my mid twen­ties. Four years ago, I switched to a safety razor. (See: My per­sonal devo­lu­tion in shav­ing) It was the pop­u­lar hip­ster thing to do at the time. The razor was, maybe \$40, but the blades were pen­nies. Using shav­ing soap with a brush is far supe­rior to foam from a can. The razor itself? The shave wasn’t as good at first, but I thought I would get more skilled at it over time. I did get bet­ter, but not really enough to match the multi-blade dis­pos­able. Cheeks and neck, no prob­lem. Chin and under the nose, it either didn’t get close enough or would get too close (blood!). A few weeks ago, I picked up a Quattro razor as an exper­i­ment — my razor of choice back when I used dis­pos­ables. I find that the mul­ti­ple flex­i­ble blades give a remark­ably close shave, even in the prob­lem areas. But because I shave only a cou­ple of times a week and because my scruff doesn’t grow long, but thick, it clogs like crazy. Based on this exper­i­ment, I think I have a new rou­tine: a first rough pass with the Merkur safety razor to get the major­ity of the scruff fol­lowed by a sec­ond detail pass with the Quattro. I like the old-timey and money-saving aspect of the safety razor, but I think I’ve finally learned that it just doesn’t *ahem* cut it for me as my one-and-only razor. Posted in: # 3D Printed Medieval Barbie Armor I don’t have kids, don’t want kids, but I do have a cer­tain fas­ci­na­tion with toys. It prob­a­bly stems back to the ’80s when the toy sup­ple­ment to the Sears cat­a­log arrived each year, con­ve­niently timed to show up a few months before Christmas. I would comb through every page of that thing — well, every page that was not for girls or babies — think­ing and dream­ing and mem­o­riz­ing all the specs. All the kids looked so happy play­ing with their new Transformers and the Star Wars fig­ures and vehi­cles. I wanted all of them. I ear­marked the pages con­tain­ing the taller-than-a-kid Space Warp mar­ble track, and espe­cially the Omnibot 2000. (Neither of which I ever received.) But my favorite toys were always related to build­ing and learn­ing. As an adult I do not buy very many toys (and by exten­sion, gad­gets), but I always have to stop at the sci­ence museum gift shop. I keep up to date on the lat­est giz­mos. It is no coin­ci­dence that most stuff com­ing out of my 3D printer tends to be toys, trin­kets, puz­zles, and geo­met­ric odd­i­ties. Certain toys jump out at me more than oth­ers. Since becom­ing aware of STE[A]M (Science, Technology, Engineering, [Art,] and Math), I now pay spe­cial atten­tion to toys that try to break girls out of the pink aisle at Toys-Я-Us. Last year that was Goldie Blox, which com­bines sto­ry­telling with engi­neer­ing. Just this week it is Faire Play: Barbie-Compatible 3D Printed Medieval Armor from an inter­net friend, Zheng3, on Kickstarter. I have men­tioned him a few other times on this blog. Notably, he devel­oped Seej, the 3D print­able table­top bat­tle game that reminds me of another favorite 80s game, Crossbows and Catapults. My own small con­tri­bu­tion to the game was a mod­u­lar penny cat­a­pult I designed two years ago. Zheng3’s Kickstarter project is to design 3D print­able medieval armor for Barbie dolls. I see this as being a great tran­si­tion to help girls ease their doll play from “let’s go shop­ping,” “let’s cook din­ner,” or even “some day Prince Charming will come” to a much more active and kick-ass “let’s fight that nasty dragon and save the vil­lage.” In addi­tion to the straight-up cos­tume play, I would hope the 3D-printed aspect might be an extra inspi­ra­tion. Maybe the kids have direct access to a 3D printer — be it a parent’s, at a friend’s house, or the Cube printer at the local Office Depot — and see the armor being printed, hope­fully lead­ing to curios­ity into how the 3D printer works. Or maybe they’re inspired to see they can mod­ify toys in their own cus­tom ways, whether it is by invent­ing their own 3D mod­els or more low-tech, like mold­ing in Fimo clay. He’s already devel­oped and released The Athena Makeover Kit (pic­tured on the right), which includes spear, shield, and winged boots. The thing I find kind of inter­est­ing is how strangely mis­shapen and bloated the boots look. Go ahead — click through to the Thingiverse page and select the boot model thumb­nail image. Careful eyes will see that Barbie’s feet are pre-molded for high-heel shoes. The out­stretched foot is not very com­pat­i­ble with boots, so the boots had to adapt, had to become more bulky at the ankle. So whether you have kids who play with Barbies or whether you just like the spirit of the project, I’d encour­age you to con­tribute a few dol­lars to his kick­starter: http://kck.st/Ol1Bid Posted in: For years, I’ve been search­ing for a good free-form sym­bolic cal­cu­la­tor pro­gram that works across mul­ti­ple desk­top oper­at­ing sys­tems. I think I’ve finally found one worth men­tion­ing. My goals: • Be able to enter expres­sions sim­i­lar to what I could do on a TI-85, back in the day, for exam­ple: 2^2+(2*10) • Be able to eas­ily edit and copy previously-entered expres­sions. • Input/output hex. I work in hex a lot. This includes expres­sions (0x48+0x16) as well as base con­ver­sion (0xC3 as decimal or 0b110101 as decimal) and bit­wise math (0xFC AND 0x7F). • Lightweight. Quick to load, quick to cal­cu­late. Get in, get out. Or leave it run­ning in the back­ground with­out eat­ing a ton of resources. I don’t need or want Mathematica or Maxima. • It needs to min­i­mally run on Windows and Mac. Ideally a Linux ver­sion would be avail­able, too. I write code on Linux (work) and Mac (home), but my office Windows box ends up being my doc­u­men­ta­tion ref­er­ence, scratch­pad, cal­cu­la­tor, and every­thing else non-coding because I typ­i­cally run my Linux IDE full-screen (bridged with Synergy, nat­u­rally). I’d pre­vi­ously got­ten hooked on Soulver. It’s great on the Mac, but there are not Windows or Linux ports. There is an iOS port, but I can’t stand the data entry. SpeedCrunch is avail­able for all plat­forms, but like many Open Source pro­grams, the oper­a­tion and user inter­face is clunky. A few months back, I found Calca, “the text edi­tor that loves math,” for Windows and Mac. It lit­er­ally is a text edi­tor. The trick is that it looks for “=” and inter­prets these as def­i­n­i­tion state­ments and it looks for “=>”, and treats these lines as prob­lems to solve. Everything from “=>” to the end of the line is rewrit­ten to become a read-only answer. For exam­ple: I don’t come close to using all the fea­tures in Calca: func­tions, unit/currency con­ver­sion, matrix math, deriv­a­tives, and so on. My needs are small, but with the pieces I do use, it per­forms extremely well. A few things I don’t like about Calca: • There are no bit­wise shifts or inver­sions. I some­times run into cases where a 32-bit inte­ger is com­posed of sev­eral unaligned bit fields. For instance, bits 5..7 might be one field. It would be great to say: 0x1234 >> 5 & 0b111 • I fre­quently get con­fused with base con­ver­sion syn­tax. Is it “as dec” or “in dec”? I fre­quently pick the wrong one. • Having to type “=>” at the end of each line is typo­graph­i­cally awk­ward. I appre­ci­ate Soulver hav­ing a sec­ond col­umn that auto-updates as you type. • It would be nice to have a “pre­vi­ous answer” sym­bol. The TI cal­cu­la­tors auto­mat­i­cally insert an “Ans” vari­able (a place­holder for the pre­vi­ous line’s answer) if you start a new line with an oper­a­tor instead of an operand. For me, it was worth buy­ing both a Windows and Mac license. I use it all the time. Posted in:
{}
## 2A.24 Ami_Pant_4G Posts: 106 Joined: Sat Aug 24, 2019 12:17 am ### 2A.24 On the basis of the expected charges on the monatomic ions, give the chemical formula of each of the following compounds: (a) magnesium arsenide; (b) indium(III) sulfide; (c) aluminum hydride; (d) hydrogen telluride; (e) bismuth(III) fluoride. Can someone please explain how to find a and b? Thanks in advance. Posts: 51 Joined: Sat Aug 24, 2019 12:16 am ### Re: 2A.24 a) Mg's preferred oxidation state is +2 while As's preferred oxidation state is -3. To make the molecule neutral, the formula is Mg3As2. b) The (III) after In indicates it has a +3 oxidation state. S's preferred oxidation state is -2. Thus, to make the molecule neutral, the formula is In2S3. Myka G 1l Posts: 100 Joined: Fri Aug 30, 2019 12:17 am ### Re: 2A.24 You just have to use the expected charges of the individual ions to figure out the chemical formula. For a the charge of Mg is 2+ and As is 3- so the chemical formula would be Mg3As2 because you want the charges to cancel out. For b the charge of In is 3+ as indicated by the Roman numerals and the charge of S is 2- so the chemical formula would be In2S3. Vuong_2F Posts: 90 Joined: Sat Sep 14, 2019 12:17 am ### Re: 2A.24 a) magnesium has a charge of +2 and arsenic has a charge of -3. Therefore, you will need 3 of Mg and 2 of As to balance out the charges, which gives you the formula $Mg_{3}{As_{2}}$ b) The charge of indium is given as (III), or +3, and sulfur has a charge of -2. Therefore, you will have the formula $In_{2}{S_{3}}$
{}
Hypothesis testing Definition: A hypothesis is a statement about a parameter Goal: Decide which of 2 complementatry hypothesis is true based on data. • $H_O$: null hypothesis • $H_A$: alternate hypothesis This setting originates from the experimental design. Definition: A hypothesis test is a rule that specifies, (often) based on the value of a test statistics $W(x_1,x_n)$. a. when to accept $H_O$ as true b. when to reject $H_O$ and accpect $H_A$ Ex. $x_1,\dots,x_n \overset{\text{iid}}\sim N(\mu,\sigma^2)$ where $\mu$ is unknown a. $H_O:\mu = 0$ vs $H_A:\mu \neq 0$ is an example of 2 complementary hypothesis. Here $\bar x$ gives information whihc hypothesis is true, so $\bar x$ would be a good test statistics. Our hypothesis test is to reject $H_O$ if $\bar x > c$ or $\bar x < -c$, for some constant $c >0$. b. $H_O: \mu \geq 0\ vs\ H_A: \mu <0$. If we use $\bar x$ as test statistics, we would reject $H_O\ if\ \bar x<-c\ for\ some\ c>0$. Definition: The likelihood ratio (LR) test statistics for testing $H_O: \theta \in \Omega_0$ vs $H_A: \theta \in \Omega_1$, where $\Omega=\Omega_0 \cup \Omega_1$ is the parameter space of $\theta$ and $\Omega_0 \cap \Phi_1 = \Phi$ $$\lambda(x_1,\dots,x_n)=\frac{\max \limits_{\theta \in \Omega_0}L(\theta|x_1,\dots,x_n)}{\max \limits_{\theta \in \Omega}L(\theta|x_1,\dots,x_n)}$$ Note: $0\leq\lambda\leq 1$, $\lambda \approx 1$ if true value of $\theta \in \Omega_0$(ie: $H_0$ is true); $\lambda \approx 0$ if $H_0$ is false (since values $\theta \in \Omega_0$ don’t fit the data well). So accept $H_0$ if $\lambda$ is close to 1, accept $H_A$ if $\lambda$ if $\lambda$ close to $0$. Definition: Let $w$ be a test statitiscs, suppose the hypothesis test is the rule “reject $H_o$ if $w\in \text{R}$”, then $\text{R}$ is called the rejection region. Ex. The rejection region for a LR test is $\lbrace \lambda \leq c \rbrace$ from some $0<c<1$. Ex. $x_1,\dots,x_n \overset{\text{iid}}\sim N(\mu,\sigma^2)$ with $\sigma^2$ known, $\mu$ is unknown parameter. $$H_O: \mu=\mu_0, H_A: \mu\neq \mu_0$$ Derive LR test statistics $\lambda$. $$\begin{split} L(\mu|x_1,\dots,x_n)&=\prod \limits_{i=1}^n \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} \\ &=\frac{1}{(2\pi)^{n/2}\sigma^{n}}e^{-\frac{\sum(x_i-\mu)^2}{2\sigma^2}} \end{split}$$ $$\begin{split} \lambda(x_1,\dots,x_n)&=\frac{\max \limits_{\theta \in \Omega_0}L(\theta|x_1,\dots,x_n)}{\max \limits_{\theta \in \Omega}L(\theta|x_1,\dots,x_n)}\\ &=\frac{\max \limits_{\mu=\mu_0}L(\theta|x_1,\dots,x_n)}{\max \limits_{\mu\in \text{R}}L(\theta|x_1,\dots,x_n)} \\ & = \frac{\frac{1}{(2\pi)^{n/2}\sigma^{n}}e^{-\frac{\sum(x_i-\mu_0)^2}{2\sigma^2}}}{\frac{1}{(2\pi)^{n/2}\sigma^{n}}e^{-\frac{\sum(x_i-\bar x)^2}{2\sigma^2}}} \\ & = \exp(-\frac{\sum(x_i-\mu_i)^2-\sum(x_i-\bar x)^2}{2\sigma^2}) \\ & = \exp(-\frac{n}{2\sigma^2}(\bar x-\mu_0)^2) \end{split}$$ It should notice that $$\begin{split}\sum(x_i - \mu_0)^2 &= \sum(x_i - \bar x+\bar x-\mu_0)^2 \\ & = \sum(x_i-\bar x)^2 + 2\sum (x_i-\bar x)(\bar x - \mu_0)+\sum(\bar x-\mu_0)^2 \\ &= \sum(x_i - \bar x)^2 + n(\bar x - \mu_0)^2 \end{split}$$ So the LR test rejects $H_O$ if $$\begin{split} &e^{-\frac{n}{2\sigma^2}(\bar x-\mu_0)^2} \leq C \\ &\Rightarrow -\frac{n}{2\sigma^2}(\bar x-\mu_0)^2 \leq logC \\ &\Rightarrow (\bar x - \mu_0)^2 \geq -\frac{2\sigma^2}{n}logC \\ &\Rightarrow \bar x -\mu_0 \geq \sqrt{-\frac{2\sigma^2}{n}logC} \\ & or \ \ \bar x -\mu_0 \leq \sqrt{-\frac{2\sigma^2}{n}logC} \\ &\Rightarrow \bar x \geq \mu_0+\sqrt{-\frac{2\sigma^2}{n}logC} \\ & or \ \ \bar x \leq \mu_0-\sqrt{-\frac{2\sigma^2}{n}logC} \end{split}$$ Based on LR test, we reject $H_O: \mu=\mu_0$ if $\bar x$ is quite different than $\mu_0$ (either larger or smaller) Definition: $w$ is test statistics, $H_O: \theta \in \Omega_0$. A hypothesis test has level $\alpha$ if $\max \limits_{\theta\in\Omega_0} p(w\in \text{R}|\theta)=\alpha$, where $\text{R}$ the rejection region. i.e. $\alpha=probability$ of rejecting $H_O$ when $H_O$ is true. $\alpha$ is also called type-I error rate. Continue our example above: Let $\mathbf{C}=-\sqrt{-\frac{2\sigma^2}{n}logC}$. What value of $\mathbf{C}$ achieves a level $\alpha$ test? Ans: reject $H_O$ if $\bar x\leq \mu_0+\mathbf{C}$ or $\bar x \leq \mu_0-\mathbf{C}$ If $H_O$ is true, the $\mu=\mu_0$ and so $\bar x\sim N(\mu, \frac{\sigma^2}{n})\Rightarrow \frac{\bar x-\mu_0}{\sigma/\sqrt{n}}\sim N(0,1)$ under $H_O$ $$\begin{split} \alpha &=\max \limits_{\mu = \mu_0}p(\bar x\geq \mu_0+\mathbf{C}\ or\ \bar x\leq \mu_0-\mathbf{C}) \\ &=p(\frac{\bar x-\mu_0}{\sigma/\sqrt{n}} \geq \frac{\mathbf{C}}{\sigma/\sqrt{n}})+p(\frac{\bar x-\mu_0}{\sigma/\sqrt{n}} \leq \frac{-\mathbf{C}}{\sigma/\sqrt{n}}) \end{split}$$where $\bar x \sim N(\mu,\frac{\sigma^2}{n})$ $\Rightarrow \frac{\mathbf{C}}{\sigma/\sqrt{n}}$ is the $1-\frac{\alpha}{2}$ quantile pf $N(0,1)$. So the final rule to achieve $\alpha$ level test is reject $H_O$ if $\frac{\bar x-\mu_0}{\sigma/\sqrt{n}}\geq Z_{1-\frac{\alpha}{2}}$ or $\frac{\bar x-\mu_0}{\sigma/\sqrt{n}}\leq -Z_{1-\frac{\alpha}{2}}$ $\Rightarrow |\frac{\bar x-\mu_0}{\sigma/\sqrt{n}}|\geq Z_{1-\frac{\alpha}{2}}$,this is the one-sample z-test for a mean. If $x_1,\dots,x_n \overset{\text{iid}}\sim N(\mu,\sigma^2)$ with both $\mu, \sigma^2$ unknown, the LR test for $H_O:\mu=\mu_0$ vs $H_A:\mu\neq \mu_0$ yields one-sample test. HINT: in numerator of LR, we need $\max \limits_{\mu=\mu_0,\sigma^2>0}L(\mu,\sigma^2|x_1,\dots,x_n)$ i.e. substitute $\mu=\mu_0$, maximize w.r.t $\sigma^2$ alone. Often, we would not know the exact distribution of LR test statistics $\lambda_0$. The, we can use the large sample approximation. Theory: $H_O: \theta \in \Omega_0, H_A:\theta \in \Omega_1, \Omega = \Omega_0 \cup \Omega_1$. $\lambda=$ LR test statistics based on $x_1,\dots,x_n$. When n is large: if $H_O$ is true, then $$-2log\lambda\overset{\text{approx}}\sim \chi_p^2$$ for any $\theta \in \Omega_0$. $p = dim(\Omega)-dim(\Omega_0)$, i.e. difference in # free parameters. LR test rejects $H_O$ if $\lambda \leq C$, then $-2log\lambda \geq \underbrace{-2logC}_{\mathbf{C}}$ and $\alpha = p(-2log\lambda \geq \mathbf{C})$ gives the approx level $\alpha$ test. So the approx level $\alpha$ of LR test is reject $H_O$ if $-2log\lambda\geq (1-\alpha)$ quantile of $\chi_p^2$. If you like my article, please feel free to donate!
{}
2. Accounting 3. unfortunately this is the third time im reposting this because... # Question: unfortunately this is the third time im reposting this because... ###### Question details Unfortunately this is the third time I'm reposting this because of an insufficient answer. I need specific excel formulas for each answer. For example: PV(.085,5000,8,1000,0) or specific cell reference: =b4*b2 per each cell answer requirement. So specific formulas for each cell that requires an answer as well as the answer. So far I've only received formulas for the "Bond price Quotation" cells.  Thank you....
{}
Browse Questions The number of moles of barium carbonate which contains $1.5$ moles of $O_2$ atoms is .................. . Can you answer this question?
{}
# What does Acnode mean? #### Acnode meaning in General Dictionary An isolated point maybe not upon a curve but whose cooumlrdinates satisfy the equation regarding the bend such that it is recognized as of the bend View more • An isolated point not upon a curve, but whoever coordinatesu000du000a match the equation of this bend such that it is generally accepted as that belongu000du000a to the curve. #### Acnode meaning in General Dictionary (n.) An isolated point perhaps not upon a curve, but whoever coordinates satisfy the equation for the curve such that it is recognized as of the bend. #### Sentence Examples with the word Acnode The singular kinds arise as before; in the crunodal and the cuspidal kinds the whole curve is an odd circuit, but in an acnodal kind the acnode must be regarded as an even circuit.
{}
# A group of students decided to collect as many paise from each members of group as is the number of members. If the total collection amount to $Rs. 59.29$ the number of the member is the group is $\begin{array}{1 1} 57 \\ 67 \\ 77 \\ 87 \end{array}$
{}
# Orthogonal Distance Regression Plane for a given PointCloud -- Am I Doing This Correctly? [Note: At karthik's suggestion, I have also posted this to the PCL-user's list.] I've written the following method to compute the orthogonal distance regression plane for a given PointCloud, using the method detailed here. However, visualizing it in rviz has proven to be a bit tricky, probably mostly due to my inexperience with quaternions. What I've got below seems to mostly work, but sometimes the plane (represented by a set of PoseStamped axes oriented along the plane's normal vector -- call this vector n) displayed by rviz seems in some cases to be a bit more off than I would expect, and so I'm still unsure if I'm doing everything properly. The two major places I'm uncertain about are in my implementation of the fitting algorithm from the above link (eg. am I properly using the Eigen::JacobiSVD and related matrices to calculate the correct things?), and in my calculation of the orientation quaternion. Here's the idea behind the calculation of the quaternion the way I have it. As from Wikipedia, the quaternion q is defined in relation to rotations by: q = cos(a/2) + u * sin(a/2) Where a is the angle to rotate by, and u is the unit vector about which to rotate. So I assumed the 'starting' vector would be (0,0,1), which should be rotated by q until it is pointing in the same direction as the plane normal. Thus, the rotation axis u is the cross product of (0,0,1) and n, and a is the angle between them, computed as a = arcsine((0,0,1) DOT n). Thank you! void Converter::publishLSPlane(pcl::PointCloud<pcl::PointXYZ> points, std_msgs::Header header) { if (points.size() >= 3) { Eigen::MatrixXd m(points.size(), 3); // Compute centroid double centroid_x = 0; double centroid_y = 0; double centroid_z = 0; BOOST_FOREACH(const pcl::PointXYZ p, points) { centroid_x += p.x; centroid_y += p.y; centroid_z += p.z; } centroid_x /= points.size(); centroid_y /= points.size(); centroid_z /= points.size(); // Define m int i=0; BOOST_FOREACH(const pcl::PointXYZ p, points) { m(i,0) = p.x - centroid_x; m(i,1) = p.y - centroid_y; m(i,2) = p.z - centroid_z; i++; } // Compute SVD Eigen::JacobiSVD<Eigen::MatrixXd> svd(m, Eigen::ComputeThinV); const int last = svd.cols() - 1; double normal_x = svd.matrixV()(0,last); double normal_y = svd.matrixV()(1,last); double normal_z = svd.matrixV()(2,last); ROS_INFO("Centroid: [%f, %f, %f]", centroid_x, centroid_y, centroid_z); ROS_INFO("Normal: [%f, %f, %f]\n", normal_x, normal_y, normal_z); // Publish the normal for display in rviz geometry_msgs::PoseStamped normal; normal.pose.position.x = centroid_x; normal.pose.position.y = centroid_y; normal.pose.position.z = centroid_z; // Yes I realize some of the math here could probably be simplified using // trig identities, but for now I just want it to work. ;) normal.pose.orientation.w = std::cos(0.5 * std::asin(normal_z)); normal.pose.orientation.x = -normal_y * std::sin(0.5 * std::asin(normal_z)); normal.pose.orientation ... edit retag close merge delete I think its better to post this question in http://www.pcl-users.org/ ( 2012-01-12 07:00:59 -0600 )edit That's a very good point. XD Thanks, I'll do that. :) ( 2012-01-12 11:50:50 -0600 )edit I'm posting it there as well, however half the question is about the correct usage of the quaternions in PoseStamped ROS messages with rviz, so it's actually still valid here too. :) ( 2012-01-12 12:14:01 -0600 )edit Sort by » oldest newest most voted I know this does not answer your question, but there is a nodelet in pcl_ros that can segment planes: http://www.ros.org/wiki/pcl_ros/Tutorials/SACSegmentationFromNormals%20planar%20segmentation more
{}
F06 Chapter Contents F06 Chapter Introduction NAG Library Manual # NAG Library Routine DocumentF06BPF Note:  before using this routine, please read the Users' Note for your implementation to check the interpretation of bold italicised terms and other implementation-dependent details. ## 1  Purpose F06BPF returns an eigenvalue of a $2$ by $2$ real symmetric matrix. ## 2  Specification FUNCTION F06BPF ( A, B, C) REAL (KIND=nag_wp) F06BPF REAL (KIND=nag_wp) A, B, C ## 3  Description F06BPF returns an eigenvalue of the $2$ by $2$ real symmetric matrix $a b b c ,$ via the function name. The result is intended for use as a shift in symmetric eigenvalue routines. The eigenvalue is computed as $c - b f + sign⁡f × 1+f2 ,$ where $f=\frac{a-c}{2b}$. This is the eigenvalue nearer to $c$ if $a\ne c$, and is equal to $c-b$ if $a=c$. None. ## 5  Parameters 1:     A – REAL (KIND=nag_wp)Input On entry: the value $a$, the $\left(1,1\right)$ element of the input matrix. 2:     B – REAL (KIND=nag_wp)Input On entry: the value $b$, the $\left(1,2\right)$ or $\left(2,1\right)$ element of the input matrix. 3:     C – REAL (KIND=nag_wp)Input On entry: the value $c$, the $\left(2,2\right)$ element of the input matrix. None. Not applicable.
{}
# Definition:Imperial/Area/Square Yard (Redirected from Definition:Square Yard) One square yard is equal to a square of side $1$ yard in length. $\ds$ $\ds 1$ square yard $\ds$ $=$ $\ds 9 = 3^2$ square feet $\ds$ $=$ $\ds 1296 = 36^2$ square inches
{}
Specify a search specify_search(date, query, databases, fields) Arguments date The date that the search is conducted. query The query used in the search (as specified using the functions for queries, e.g. query_full()). databases The databases that were searched, as a named list of vectors, where each vector's name is the name of the interface used to access the databases in the vector. fields The fields in which the query terms were searched. Value An mbf_search object with the search specifications.
{}
# 3.5.1.1.27 Log ## Description This function returns the base 10 logarithm value of x. Note: The LabTalk log function is based on 10 while the Origin C log function is based on e. The Origin C base 10 logarithmic function is log10. ## Syntax double log(double x) ## Parameter x can be any positive number you want to calculate the base 10 logarithm. ## Return Returns the base 10 logarithm value of x. ## Example aa = log(10); aa = ; //1 bb = log(20); bb = ; //1.301029995664
{}
# Rational Number Expressible as Sum of Reciprocals of Distinct Squares/Mistake ## Source Work The Puzzles: Egyptian Fractions ## Mistake The sum of the series $1 + 1 / 2^2 + 1 / 3^2 + 1 / 4^2 \ldots = \pi^2 / 6$, so the sum of different Egyptian fractions whose denominators are squares cannot exceed $\pi^2 / 6$, but might equal, for example, $\frac 1 2$. ## Correction It is implicit that $1$ is not included in the set of Egyptian fractions. We have that: $\ds \dfrac {\pi^2} 6$ $\approx$ $\ds 1.6449 \ldots$ $\ds \leadsto \ \$ $\ds \dfrac {\pi^2} 6 - 1$ $\approx$ $\ds 0.6449 \ldots$ Hence the sentence should end: ... cannot exceed $\pi^2 / 6 - 1$, but might equal, for example, $\frac 1 2$.
{}
# Area of a regular hexagon via area of triangles Problem: Find the area of a regular hexagon whose sides measures 5 cm Sol'n 1: I can cut the hexagon into 6 small triangles, so the area of triangle times 6 will be equal to the area of the polygon. Since the triangles are equilateral I can use the formula for it. Area of triangle = $\frac{\sqrt{3}}{4}(5)^2= 10.83$ Area of hexagon = (6)(10.83) = 64.95 Sol'n 2: Using the formula $\frac{1}{2}(base)(height)$ for the area of triangle. The angle of triangle (angle at the radius) is equal to 60 degrees (From 360 / 6). Cutting the triangle into half (to get the base and height) will result to an angle of 30 degrees and base of 2.5. To get the height: $tan(30) = \frac{2.5}{height} = 4.33$, so the area of the triangle: $\frac{1}{2}(2.5)(4.33) = 5.41$ Area of hexagon = (6)(5.41) = 32.475 which does not equal to the area in solution 1. Question: I noticed that the area calculated on solution 2 is half the area calculated in solution 1. I don't know why. I don't think plugging in the original length of the base is right or logical? I have answered a related problem like this and I have gotten the polygon's area with 1/2(base)(height) and not plugging the original base length back to the formula of area after getting the height. Topic is easy but I don't get why I get so confused. :( Any help will be appreciated. You've created your diagram like this, calculated the area of the triangle I've labelled, and then multiplied that area by $6$. Notice however, by your divisions, we've split the hexagon into $12$ triangles, so we would need to multiply the area by $12$, not $6$, which explains why your area for Sol $2$ is half the area for Sol $1$ (which is correct) The area of one triangle is given by $\frac 12\cdot2.5\cdot2.5\tan(60)=\frac{25\sqrt3}{8}$. Multiplying this area by $12$ (the amount of triangles) gives us $\frac{75\sqrt3}{2}\approx64.9519$ • Please excuse the awful quality of the diagram but I hope that it is clear enough – Rhys Hughes May 19 '18 at 14:29 • I answered a similar problem. Find the area of a regular octagon inscribed in a circle whose radius is 12. I also dissected the octagon into 8 triangles and like in the problem above, I formed a right triangle with hypotenuse = 12, angle = 22.5 and base = x/2. The calculated area of the octagon using this method equals the calculated area using the other triangle formula: $absin\theta$ and I did not need to multiply the area by 16 but just by 8 which confuses me. – Jayce May 19 '18 at 14:36 • Typo on my comment above. Should be $\frac{1}{2}absin\theta$. – Jayce May 19 '18 at 14:52 • Octagons have a regular angle of $135^0$. When you divide the octagon into eight triangles, you get an angle of $45^0$ at the centre. Thus for the whole octagon you should use $8\cdot\frac{1}{2}\cdot12^2\cdot\sin45=288\sqrt2 \approx407.2935$ – Rhys Hughes May 19 '18 at 14:58 • You are right. Try solving it via $\frac{1}{2}(base)(height)$. It will result to 407.29 even though the area is only multiplied by 8 which should be by 16 as per your explanation. I used 22.5 as the angle for the right triangle, "x/2" for the base and the hypotenuse, 12. Then Via "soh-cah-toa", base is 9.184 and height is 11.087. Plugging it in: $\frac{1}{2}(9.184)(11.087) = 50.911$ then multiplied by 8 which equals to 407.29, the same with $absin\theta$ but I only multiplied it by 8 and not by 16. I need an answer for this, so I can move on. :( – Jayce May 19 '18 at 15:14 Solution 1 gives the correct answer. In solution 2, the triangle that you're calculating is actually a right-angled triangle with interior angles $30°$, $60°$ and $90°$, and it is in fact half of the triangle calculated in solution 1. This explains why the numerical result obtained in solution 2 is half of that in solution 1. • I answered a similar problem. Find the area of a regular octagon inscribed in a circle whose radius is 12. I used 2 formulas of triangle: $\frac{1}{2}absin\theta$ and $\frac{1}{2}(base)(height)$. For $\frac{1}{2}(base)(height)$, I did the same thing, I formed a right triangle to get the height then calculated the area then multiplied it by 8 to get the octagon area. This equals the area calculated using $\frac{1}{2}absin\theta$. It equals even though I multiplied it just by 8 which is supposed to be by 16 just like you explained. :( – Jayce May 19 '18 at 14:50 • The solution for the above, @The right triangle: angle = 22.5 (From 360/8 = 45/2 = 22.5), side = 12 (From the given radius) and base = x/2. Via "soh-cah-toa", base = 9.184 and height = 11.087. Plugging it into $8*[\frac{1}{2}(base)(height)] = 407.89$ which is the area of octagon. Via $\frac{1}{2}absin\theta$ where a = b = 12 and angle = 45 (from 360/8). Plugging it in the formula and multiplying it by 8 to get the octagon area, we get 407.29. The areas are equal even though in solution 1, I just multiplied it by 8. – Jayce May 19 '18 at 15:05
{}
# Logic learning machine Logic Learning Machine (LLM) is a machine learning method based on the generation of intelligible rules. LLM is an efficient implementation of the Switching Neural Network (SNN) paradigm,[1] developed by Marco Muselli, Senior Researcher at the Italian National Research Council CNR-IEIIT in Genoa. Logic Learning Machine is implemented in the Rulex suite. LLM has been employed in different fields, including orthopaedic patient classification,[2] DNA microarray analysis [3] and Clinical Decision Support System.[4] ## History The Switching Neural Network approach was developed in the 1990s to overcome the drawbacks of the most commonly used machine learning methods. In particular, black box methods, such as multilayer perceptron and support vector machine, had good accuracy but could not provide deep insight into the studied phenomenon. On the other hand, decision trees were able to describe the phenomenon but often lacked accuracy. Switching Neural Networks made use of Boolean algebra to build sets of intelligible rules able to obtain very good performance. In 2014, an efficient version of Switching Neural Network was developed and implemented in the Rulex suite with the name Logic Learning Machine.[5] Also a LLM version devoted to regression problems was developed. ## General Like other machine learning methods, LLM uses data to build a model able to perform a good forecast about future behaviors. LLM starts from a table including a target variable (output) and some inputs and generates a set of rules that return the output value ${\displaystyle y}$ corresponding to a given configuration of inputs. A rule is written in the form: ${\displaystyle {\textbf {if}}{\text{ }}premise{\text{ }}{\textbf {then}}{\text{ }}consequence}$ where consequence contains the output value whereas premise includes one or more conditions on the inputs. According to the input type, conditions can have different forms: • for categorical variables the input value must be in a given subset :${\displaystyle x_{1}\in \{A,B,C,...\}}$. • for ordered variables the condition is written as an inequality or an interval: ${\displaystyle x_{2}\leq \alpha }$ or ${\displaystyle \beta \leq x_{3}\leq \gamma }$ A possible rule is therefore in the form ${\displaystyle {\textbf {if}}{\text{ }}x_{1}\in \{A,B,C,...\}{\text{ AND }}x_{2}\leq \alpha {\text{ AND }}\beta \leq x_{3}\leq \gamma {\text{ }}{\textbf {then}}{\text{ }}y={\bar {y}}}$ ## Types According to the output type, different versions of Logic Learning Machine have been developed: • Logic Learning Machine for classification, when the output is a categorical variable, which can assume values in a finite set • Logic Learning Machine for regression, when the output is an integer or real number. ## References 1. ^ Muselli, Marco (2006). "Switching Neural Networks: A new connectionist model for classification" (PDF). WIRN 2005 and NAIS 2005, Lecture Notes on Computer Science. 3931: 23–30. 2. ^ Mordenti, M.; Ferrari, E.; Pedrini, E.; Fabbri, N.; Campanacci, L.; Muselli, M.; Sangiorgi, L. (2013). "Validation of a New Multiple Osteochondromas Classification Through Switching Neural Networks". American Journal of Medical Genetics Part A. 161: 556–560. doi:10.1002/ajmg.a.35819. PMID 23401177. 3. ^ Cangelosi, D.; Muselli, M.; Blengio, F.; Becherini, P.; Versteeg, R.; Conte, M.; Varesio, L. (2013). "Use of Attribute Driven Incremental Discretization and Logic Learning Machine to build a prognostic classifier for neuroblastoma patients". BITS2013. 4. ^ Parodi, S.; Filiberti, R.; Marroni, P.; Montani, E.; Muselli, M. (2014). "Differential diagnosis of pleural mesothelioma using Logic Learning Machine". BITS2014. 5. ^ "Rulex: a software for knowledge extraction from data". Italian National Research Council. Retrieved 7 March 2015.
{}
# Exceptional isomorphism In mathematics, an exceptional isomorphism, also called an accidental isomorphism, is an isomorphism between members ai and bj of two families (usually infinite) of mathematical objects, that is not an example of a pattern of such isomorphisms.[note 1] These coincidences are at times considered a matter of trivia,[1] but in other respects they can give rise to other phenomena, notably exceptional objects.[1] In the below, coincidences are listed in all places they occur. ## Groups ### Finite simple groups The exceptional isomorphisms between the series of finite simple groups mostly involve projective special linear groups and alternating groups, and are:[1] • $L_2(4) \cong L_2(5) \cong A_5,$ the smallest non-abelian simple group (order 60); • $L_2(7) \cong L_3(2),$ the second-smallest non-abelian simple group (order 168) – PSL(2,7); • $L_2(9) \cong A_6,$ • $L_4(2) \cong A_8,$ • $\operatorname{PSU}_4(2) \cong \operatorname{PSp}_4(3),$ between a projective special orthogonal group and a projective symplectic group. ### Groups of Lie type In addition to the aforementioned, there are some isomorphisms involving SL, PSL, GL, PGL, and the natural maps between these. For example, the groups over $\mathbf{F}_5$ have a number of exceptional isomorphisms: ### Alternating groups and symmetric groups The compound of five tetrahedra expresses the exceptional isomorphism between the icosahedral group and the alternating group on five letters. There are coincidences between alternating groups and small groups of Lie type: • $L_2(4) \cong L_2(5) \cong A_5,$ • $L_2(9) \cong Sp_4(2)' \cong A_6,$ • $Sp_4(2) \cong S_6,$ • $L_4(2) \cong O_6(+,2)' \cong A_8,$ • $O_6(+,2) \cong S_8.$ These can all be explained in a systematic way by using linear algebra (and the action of $S_n$ on affine $n$-space) to define the isomorphism going from the right side to the left side. (The above isomorphisms for $A_8$ and $S_8$ are linked via the exceptional isomorphism $SL_4/\mu_2 \cong SO_6$.) There are also some coincidences with symmetries of regular polyhedra: the alternating group A5 agrees with the icosahedral group (itself an exceptional object), and the double cover of the alternating group A5 is the binary icosahedral group. ### Cyclic groups Cyclic groups of small order especially arise in various ways, for instance: • $C_2 \cong \{\pm1\} \cong \operatorname{O}(1) \cong \operatorname{Spin}(1) \cong \mathbb Z^*$, the last being the group of units of the integers ### Spheres The spheres S0, S1, and S3 admit group structures, which arise in various ways: • $S^0\cong\operatorname{O}(1)$, • $S^1\cong\operatorname{SO}(2)\cong\operatorname{U}(1)\cong\operatorname{Spin}(2)$, • $S^3\cong\operatorname{Spin}(3)\cong\operatorname{SU}(2)\cong\operatorname{Sp}(1)$. ### Coxeter groups The exceptional isomorphisms of connected Dynkin diagrams. There are some exceptional isomorphisms of Coxeter diagrams, yielding isomorphisms of the corresponding Coxeter groups and of polytopes realizing the symmetries. These are: • A2 = I2(2) (2-simplex is regular 3-gon/triangle); • BC2 = I2(4) (2-cube (square) = 2-cross-polytope (diamond) = regular 4-gon) • A3 = D3 (3-simplex (tetrahedron) is 3-demihypercube (demicube), as per diagram) • A1 = B1 = C1 (= D1?) • D2 = A1 × A1 • A4 = E4 • D5 = E5 Closely related ones occur in Lie theory for Dynkin diagrams. ## Lie theory In low dimensions, there are isomorphisms among the classical Lie algebras and classical Lie groups called accidental isomorphisms. For instance, there are isomorphisms between low-dimensional spin groups and certain classical Lie groups, due to low-dimensional isomorphisms between the root systems of the different families of simple Lie algebras, visible as isomorphisms of the corresponding Dynkin diagrams: • Trivially, A0 = B0 = C0 = D0 • A1 = B1 = C1 , or $\mathfrak{sl}_2 \cong \mathfrak{so}_3 \cong \mathfrak{sp}_1$ • B2 = C2, or $\mathfrak{so}_5 \cong \mathfrak{sp}_2$ • D2 = A1 × A1, or $\mathfrak{so}_{4} \cong \mathfrak{sl}_2 \oplus \mathfrak{sl}_2$; note that these are disconnected, but part of the D-series • A3 = D3 $\mathfrak{sl}_4 \cong \mathfrak{so}_6$ • A4 = E4; the E-series usually starts at 6, but can be started at 4, yielding isomorphisms • D5 = E5 Spin(1) = O(1) Spin(2) = U(1) = SO(2) Spin(3) = Sp(1) = SU(2) Spin(4) = Sp(1) × Sp(1) Spin(5) = Sp(2) Spin(6) = SU(4)
{}
# How do you factor completely 4y^2-36? $4 \left(y - 3\right) \left(y + 3\right)$ $4 {y}^{2} - 36$ = $4 \left({y}^{2} - 9\right)$ = $4 \left(y - 3\right) \left(y + 3\right)$
{}
# How to convert the new digital on air signal for old tv system? 1. Mar 22, 2013 ### yungman I always record all my tv programs and watch it in much later time. I am actually like a year and half behind the current schedules ( I just started the 2011-2012 season!!). The advantages are I can skip commercials and the most important thing, I can watch the coat hanger season finale and watch the result without waiting!!!:rofl: Without a big antenna up on top of the roof, the tv reception in my area is bad. Currently I have Direct tv with 4 stations so I can set up to 4 DVD recorders at one time. But in reality, 90% of the show I watch are available on air. I am thinking about putting some money to install a roof antenna and buy a converter box so I can tape some of the show directly from air and cut down on the number of Direct tv stations as they raise the price for each additional station. I don't exactly want to get into DVR using hard drive as it is too important that I don't want to risk one hard disk failure and lost the whole season's recording, I rather have them in separate DVDs. So.......... I want to install a new Digital antenna and convert the signal so I can use it for the older VCR and non HD DVD recorders. I know I need to first install a HD digital tv antenna, then buy a converter box. My question is if I have multiple VCRs and DVD recorders in different room, do I need to buy multiple converter boxes one for each room. How does the converter box work? Do you need to tune the converter box to particular channel to watch ( eg. channel 7 for ABC here). Or the converter box just convert the RF signals of all channels and feed it to the tv tuner, then you can tune to a particular channel with the VCR or tv. If I need one converter box in each room, then the cost is going to have to add on top of the antenna. I am first doing a cost analysis to see whether it's even worth considering. Thanks Alan 2. Mar 22, 2013 ### jim hardy Digital TV mostly uses the old UHF frequencies plus some a bit higher. A very few stations stayed on their old VHF channels. So your old antenna will do fine. At my place in Idaho i made a simple folded dipole and mounted it to rooftop airconditioner unit. No holes in the roof. At those frequencies a half wave is only about ten inches. Used 1" PVC pipe and #10 solid copper wire, and a 300::75 ohm balun TV antenna transformer(50 cents at thrift shop, $5 at Radio shack). I made mine 3/2 wave for average of my local stations, because most of them were tolerably close in frequency. Folded dipole is fairly broad. That way it worked out ~1/2 wave for one of them that was way lower. My antenna is about 27 inches. Works great. Gets about ten stations. However - the TV stations kept their old channel number - the channel number is no longer associated with the frequency. So you need to find on web a listing of stations in your area and what is the actual frequency they broadcast on. This site used to be great. Not what it used to be but I just tried it - give it your address and it tells you what direction to the stations and gives both their true RF channel number and the channel number they pretend to be. http://www.antennaweb.org/ Google used to have a page for aiming your antenna that gave you a satellite view of your house with pointers to local stations but I cannot find it anymore. In my neck of the woods, old channel 19 is now on RF channel 20 but channel 8 stayed on RF channel 8 . So here in Arkanasas I still need a looonnggg antenna. I had a$39 GE converter box that scans and tells you what actual frequencies it found stations on. The converter box receives one channel and puts out RF or NTSC composite.. You tune it with a remote. If you find a box with buttons it's handy because the remotes always hide someplace. You might find your local channel seven on the old channel 7 VHF frequency or you might find it anywhere else. If antennaweb isn't helpful, try these... http://transition.fcc.gov/mb/engineering/maps/ this one looks pretty good ! gives same as antennaweb but easier. http://transition.fcc.gov/dtv/markets/ http://www.global-cm.net/OFFAIRLOCALTELEVISION.html good luck old jim 3. Mar 22, 2013 ### yungman Thanks Jim for all the info. I don't have an antenna on the roof, so that money has to be spent. From what you described, the converter box actually is the one to tune to the specific channel. This mean if I have 4 recorder stations, I do need 4 individual converters to tune to 4 individual channel. Last edited: Mar 22, 2013 4. Mar 22, 2013 ### jim hardy Yes, I am unaware of a tuner that'll receive multiple channels at once. Check local thrift shops. Thanks Alan
{}
# Closures Closures are self-contained blocks of functionality that can be passed around and used in your code. Closures in Swift are similar to blocks in C and Objective-C and to lambdas in other programming languages. Closures can capture and store references to any constants and variables from the context in which they are defined. This is known as closing over those constants and variables. Swift handles all of the memory management of capturing for you. Don’t worry if you are not familiar with the concept of capturing. It is explained in detail below in Capturing Values. Global and nested functions, as introduced in Functions, are actually special cases of closures. Closures take one of three forms: • Global functions are closures that have a name and do not capture any values. • Nested functions are closures that have a name and can capture values from their enclosing function. • Closure expressions are unnamed closures written in a lightweight syntax that can capture values from their surrounding context. Swift’s closure expressions have a clean, clear style, with optimizations that encourage brief, clutter-free syntax in common scenarios. These optimizations include: • Inferring parameter and return value types from context • Implicit returns from single-expression closures • Shorthand argument names • Trailing closure syntax ## Closure Expressions Nested functions, as introduced in Nested Functions, are a convenient means of naming and defining self-contained blocks of code as part of a larger function. However, it is sometimes useful to write shorter versions of function-like constructs without a full declaration and name. This is particularly true when you work with functions or methods that take functions as one or more of their arguments. Closure expressions are a way to write inline closures in a brief, focused syntax. Closure expressions provide several syntax optimizations for writing closures in a shortened form without loss of clarity or intent. The closure expression examples below illustrate these optimizations by refining a single example of the sorted(by:) method over several iterations, each of which expresses the same functionality in a more succinct way. ### The Sorted Method Swift’s standard library provides a method called sorted(by:), which sorts an array of values of a known type, based on the output of a sorting closure that you provide. Once it completes the sorting process, the sorted(by:) method returns a new array of the same type and size as the old one, with its elements in the correct sorted order. The original array is not modified by the sorted(by:) method. The closure expression examples below use the sorted(by:) method to sort an array of String values in reverse alphabetical order. Here’s the initial array to be sorted: let names = ["Chris", "Alex", "Ewa", "Barry", "Daniella"] The sorted(by:) method accepts a closure that takes two arguments of the same type as the array’s contents, and returns a Bool value to say whether the first value should appear before or after the second value once the values are sorted. The sorting closure needs to return true if the first value should appear before the second value, and false otherwise. This example is sorting an array of String values, and so the sorting closure needs to be a function of type (String, String) -> Bool. One way to provide the sorting closure is to write a normal function of the correct type, and to pass it in as an argument to the sorted(by:) method: func backward(_ s1: String, _ s2: String) -> Bool { return s1 > s2 } var reversedNames = names.sorted(by: backward) // reversedNames is equal to ["Ewa", "Daniella", "Chris", "Barry", "Alex"] If the first string (s1) is greater than the second string (s2), the backward(_:_:) function will return true, indicating that s1 should appear before s2 in the sorted array. For characters in strings, “greater than” means “appears later in the alphabet than”. This means that the letter "B" is “greater than” the letter "A", and the string "Tom" is greater than the string "Tim". This gives a reverse alphabetical sort, with "Barry" being placed before "Alex", and so on. However, this is a rather long-winded way to write what is essentially a single-expression function (a > b). In this example, it would be preferable to write the sorting closure inline, using closure expression syntax. ### Closure Expression Syntax Closure expression syntax has the following general form: { (parameters) -> return type in statements } The parameters in closure expression syntax can be in-out parameters, but they can’t have a default value. Variadic parameters can be used if you name the variadic parameter. Tuples can also be used as parameter types and return types. The example below shows a closure expression version of the backward(_:_:) function from above: reversedNames = names.sorted(by: { (s1: String, s2: String) -> Bool in return s1 > s2 }) Declaration of parameters and return type for this inline closure is identical to the declaration from the backward(_:_:) function. In both cases, it is written as (s1: String, s2: String) -> Bool. However, for the inline closure expression, the parameters and return type are written inside the curly braces, not outside of them. The start of the closure’s body is introduced by the in keyword. This keyword indicates that the definition of the closure’s parameters and return type has finished, and the body of the closure is about to begin. Because the body of the closure is so short, it can even be written on a single line: reversedNames = names.sorted(by: { (s1: String, s2: String) -> Bool in return s1 > s2 } ) This illustrates that the overall call to the sorted(by:) method has remained the same. A pair of parentheses still wrap the entire argument for the method. However, that argument is now an inline closure. ### Inferring Type From Context Because the sorting closure is passed as an argument to a method, Swift can infer the types of its parameters and the type of the value it returns. The sorted(by:) method is being called on an array of strings, so its argument must be a function of type (String, String) -> Bool. This means that the (String, String) and Bool types do not need to be written as part of the closure expression’s definition. Because all of the types can be inferred, the return arrow (->) and the parentheses around the names of the parameters can also be omitted: reversedNames = names.sorted(by: { s1, s2 in return s1 > s2 } ) It is always possible to infer the parameter types and return type when passing a closure to a function or method as an inline closure expression. As a result, you never need to write an inline closure in its fullest form when the closure is used as a function or method argument. Nonetheless, you can still make the types explicit if you wish, and doing so is encouraged if it avoids ambiguity for readers of your code. In the case of the sorted(by:) method, the purpose of the closure is clear from the fact that sorting is taking place, and it is safe for a reader to assume that the closure is likely to be working with String values, because it is assisting with the sorting of an array of strings. ### Implicit Returns from Single-Expression Closures Single-expression closures can implicitly return the result of their single expression by omitting the return keyword from their declaration, as in this version of the previous example: reversedNames = names.sorted(by: { s1, s2 in s1 > s2 } ) Here, the function type of the sorted(by:) method’s argument makes it clear that a Bool value must be returned by the closure. Because the closure’s body contains a single expression (s1 > s2) that returns a Bool value, there is no ambiguity, and the return keyword can be omitted. Swift automatically provides shorthand argument names to inline closures, which can be used to refer to the values of the closure’s arguments by the names $0, $1, $2, and so on. If you use these shorthand argument names within your closure expression, you can omit the closure’s argument list from its definition, and the number and type of the shorthand argument names will be inferred from the expected function type. The in keyword can also be omitted, because the closure expression is made up entirely of its body: reversedNames = names.sorted(by: {$0 > $1 } ) Here, $0 and $1 refer to the closure’s first and second String arguments. ### Operator Methods There’s actually an even shorter way to write the closure expression above. Swift’s String type defines its string-specific implementation of the greater-than operator (>) as a method that has two parameters of type String, and returns a value of type Bool. This exactly matches the method type needed by the sorted(by:) method. Therefore, you can simply pass in the greater-than operator, and Swift will infer that you want to use its string-specific implementation: reversedNames = names.sorted(by: >) For more about operator method, see Operator Methods. ## Trailing Closures If you need to pass a closure expression to a function as the function’s final argument and the closure expression is long, it can be useful to write it as a trailing closure instead. A trailing closure is written after the function call’s parentheses, even though it is still an argument to the function. When you use the trailing closure syntax, you don’t write the argument label for the closure as part of the function call. func someFunctionThatTakesAClosure(closure: () -> Void) { // function body goes here } // Here's how you call this function without using a trailing closure: someFunctionThatTakesAClosure(closure: { // closure's body goes here }) // Here's how you call this function with a trailing closure instead: someFunctionThatTakesAClosure() { // trailing closure's body goes here } The string-sorting closure from the Closure Expression Syntax section above can be written outside of the sorted(by:) method’s parentheses as a trailing closure: reversedNames = names.sorted() {$0 > $1 } If a closure expression is provided as the function or method’s only argument and you provide that expression as a trailing closure, you do not need to write a pair of parentheses () after the function or method’s name when you call the function: reversedNames = names.sorted {$0 > \$1 } Trailing closures are most useful when the closure is sufficiently long that it is not possible to write it inline on a single line. As an example, Swift’s Array type has a map(_:) method which takes a closure expression as its single argument. The closure is called once for each item in the array, and returns an alternative mapped value (possibly of some other type) for that item. The nature of the mapping and the type of the returned value is left up to the closure to specify. After applying the provided closure to each array element, the map(_:) method returns a new array containing all of the new mapped values, in the same order as their corresponding values in the original array. Here’s how you can use the map(_:) method with a trailing closure to convert an array of Int values into an array of String values. The array [16, 58, 510] is used to create the new array ["OneSix", "FiveEight", "FiveOneZero"]: et digitNames = [ 0: "Zero", 1: "One", 2: "Two", 3: "Three", 4: "Four", 5: "Five", 6: "Six", 7: "Seven", 8: "Eight", 9: "Nine" ] let numbers = [16, 58, 510] The code above creates a dictionary of mappings between the integer digits and English-language versions of their names. It also defines an array of integers, ready to be converted into strings. You can now use the numbers array to create an array of String values, by passing a closure expression to the array’s map(_:) method as a trailing closure: let strings = numbers.map { (number) -> String in var number = number var output = "" repeat { output = digitNames[number % 10]! + output number /= 10 } while number > 0 return output } // strings is inferred to be of type [String] // its value is ["OneSix", "FiveEight", "FiveOneZero"] The map(_:) method calls the closure expression once for each item in the array. You do not need to specify the type of the closure’s input parameter, number, because the type can be inferred from the values in the array to be mapped. In this example, the variable number is initialized with the value of the closure’s number parameter, so that the value can be modified within the closure body. (The parameters to functions and closures are always constants.) The closure expression also specifies a return type of String, to indicate the type that will be stored in the mapped output array. The closure expression builds a string called output each time it is called. It calculates the last digit of number by using the remainder operator (number % 10), and uses this digit to look up an appropriate string in the digitNames dictionary. The closure can be used to create a string representation of any integer greater than zero. The call to the digitNames dictionary’s subscript is followed by an exclamation mark (!), because dictionary subscripts return an optional value to indicate that the dictionary lookup can fail if the key does not exist. In the example above, it is guaranteed that number % 10 will always be a valid subscript key for the digitNames dictionary, and so an exclamation mark is used to force-unwrap the String value stored in the subscript’s optional return value. The string retrieved from the digitNames dictionary is added to the front of output, effectively building a string version of the number in reverse. (The expression number % 10 gives a value of 6 for 16, 8 for 58, and 0 for 510.) The number variable is then divided by 10. Because it is an integer, it is rounded down during the division, so 16 becomes 1, 58 becomes 5, and 510 becomes 51. The process is repeated until number is equal to 0, at which point the output string is returned by the closure, and is added to the output array by the map(_:) method. The use of trailing closure syntax in the example above neatly encapsulates the closure’s functionality immediately after the function that closure supports, without needing to wrap the entire closure within the map(_:) method’s outer parentheses. ## Capturing Values A closure can capture constants and variables from the surrounding context in which it is defined. The closure can then refer to and modify the values of those constants and variables from within its body, even if the original scope that defined the constants and variables no longer exists. In Swift, the simplest form of a closure that can capture values is a nested function, written within the body of another function. A nested function can capture any of its outer function’s arguments and can also capture any constants and variables defined within the outer function. Here’s an example of a function called makeIncrementer, which contains a nested function called incrementer. The nested incrementer() function captures two values, runningTotal and amount, from its surrounding context. After capturing these values, incrementer is returned by makeIncrementer as a closure that increments runningTotal by amount each time it is called. func makeIncrementer(forIncrement amount: Int) -> () -> Int { var runningTotal = 0 func incrementer() -> Int { runningTotal += amount return runningTotal } return incrementer } The return type of makeIncrementer is () -> Int. This means that it returns a function, rather than a simple value. The function it returns has no parameters, and returns an Int value each time it is called. To learn how functions can return other functions, see Function Types as Return Types. The makeIncrementer(forIncrement:) function defines an integer variable called runningTotal, to store the current running total of the incrementer that will be returned. This variable is initialized with a value of 0. The makeIncrementer(forIncrement:) function has a single Int parameter with an argument label of forIncrement, and a parameter name of amount. The argument value passed to this parameter specifies how much runningTotal should be incremented by each time the returned incrementer function is called. The makeIncrementer function defines a nested function called incrementer, which performs the actual incrementing. This function simply adds amount to runningTotal, and returns the result. When considered in isolation, the nested incrementer() function might seem unusual: func incrementer() -> Int { runningTotal += amount return runningTotal } The incrementer() function doesn’t have any parameters, and yet it refers to runningTotal and amount from within its function body. It does this by capturing a reference to runningTotal and amount from the surrounding function and using them within its own function body. Capturing by reference ensures that runningTotal and amount do not disappear when the call to makeIncrementer ends, and also ensures that runningTotal is available the next time the incrementer function is called. As an optimization, Swift may instead capture and store a copy of a value if that value is not mutated by a closure, and if the value is not mutated after the closure is created. Swift also handles all memory management involved in disposing of variables when they are no longer needed. Here’s an example of makeIncrementer in action: let incrementByTen = makeIncrementer(forIncrement: 10) This example sets a constant called incrementByTen to refer to an incrementer function that adds 10 to its runningTotal variable each time it is called. Calling the function multiple times shows this behavior in action: incrementByTen() // returns a value of 10 incrementByTen() // returns a value of 20 incrementByTen() // returns a value of 30 If you create a second incrementer, it will have its own stored reference to a new, separate runningTotal variable: let incrementBySeven = makeIncrementer(forIncrement: 7) incrementBySeven() // returns a value of 7 Calling the original incrementer (incrementByTen) again continues to increment its own runningTotal variable, and does not affect the variable captured by incrementBySeven: incrementByTen() // returns a value of 40 If you assign a closure to a property of a class instance, and the closure captures that instance by referring to the instance or its members, you will create a strong reference cycle between the closure and the instance. Swift uses capture lists to break these strong reference cycles. For more information, see Strong Reference Cycles for Closures. ## Closures Are Reference Types In the example above, incrementBySeven and incrementByTen are constants, but the closures these constants refer to are still able to increment the runningTotal variables that they have captured. This is because functions and closures are reference types. Whenever you assign a function or a closure to a constant or a variable, you are actually setting that constant or variable to be a reference to the function or closure. In the example above, it is the choice of closure that incrementByTen refers to that is constant, and not the contents of the closure itself. This also means that if you assign a closure to two different constants or variables, both of those constants or variables refer to the same closure. let alsoIncrementByTen = incrementByTen alsoIncrementByTen() // returns a value of 50 incrementByTen() // returns a value of 60 The example above shows that calling alsoIncrementByTen is the same as calling incrementByTen. Because both of them refer to the same closure, they both increment and return the same running total. ## Escaping Closures A closure is said to escape a function when the closure is passed as an argument to the function, but is called after the function returns. When you declare a function that takes a closure as one of its parameters, you can write @escaping before the parameter’s type to indicate that the closure is allowed to escape. One way that a closure can escape is by being stored in a variable that is defined outside the function. As an example, many functions that start an asynchronous operation take a closure argument as a completion handler. The function returns after it starts the operation, but the closure isn’t called until the operation is completed — the closure needs to escape, to be called later. For example: var completionHandlers: [() -> Void] = [] func someFunctionWithEscapingClosure(completionHandler: @escaping () -> Void) { completionHandlers.append(completionHandler) } The someFunctionWithEscapingClosure(_:) function takes a closure as its argument and adds it to an array that’s declared outside the function. If you didn’t mark the parameter of this function with @escaping, you would get a compile-time error. Marking a closure with @escaping means you have to refer to self explicitly within the closure. For example, in the code below, the closure passed to someFunctionWithEscapingClosure(_:) is an escaping closure, which means it needs to refer to self explicitly. In contrast, the closure passed to someFunctionWithNonescapingClosure(_:) is a nonescaping closure, which means it can refer to self implicitly. func someFunctionWithNonescapingClosure(closure: () -> Void) { closure() } class SomeClass { var x = 10 func doSomething() { someFunctionWithEscapingClosure { self.x = 100 } someFunctionWithNonescapingClosure { x = 200 } } } let instance = SomeClass() instance.doSomething() print(instance.x) // Prints "200" completionHandlers.first?() print(instance.x) // Prints "100" ## Autoclosures An autoclosure is a closure that is automatically created to wrap an expression that’s being passed as an argument to a function. It doesn’t take any arguments, and when it’s called, it returns the value of the expression that’s wrapped inside of it. This syntactic convenience lets you omit braces around a function’s parameter by writing a normal expression instead of an explicit closure. It’s common to call functions that take autoclosures, but it’s not common to implement that kind of function. For example, the assert(condition:message:file:line:) function takes an autoclosure for its condition and message parameters; its condition parameter is evaluated only in debug builds and its message parameter is evaluated only if condition is false. An autoclosure lets you delay evaluation, because the code inside isn’t run until you call the closure. Delaying evaluation is useful for code that has side effects or is computationally expensive, because it lets you control when that code is evaluated. The code below shows how a closure delays evaluation. var customersInLine = ["Chris", "Alex", "Ewa", "Barry", "Daniella"] print(customersInLine.count) // Prints "5" let customerProvider = { customersInLine.remove(at: 0) } print(customersInLine.count) // Prints "5" print("Now serving \(customerProvider())!") // Prints "Now serving Chris!" print(customersInLine.count) // Prints "4" Even though the first element of the customersInLine array is removed by the code inside the closure, the array element isn’t removed until the closure is actually called. If the closure is never called, the expression inside the closure is never evaluated, which means the array element is never removed. Note that the type of customerProvider is not String but () -> String — a function with no parameters that returns a string. You get the same behavior of delayed evaluation when you pass a closure as an argument to a function. // customersInLine is ["Alex", "Ewa", "Barry", "Daniella"] func serve(customer customerProvider: () -> String) { print("Now serving \(customerProvider())!") } serve(customer: { customersInLine.remove(at: 0) } ) // Prints "Now serving Alex!" The serve(customer:) function in the listing above takes an explicit closure that returns a customer’s name. The version of serve(customer:) below performs the same operation but, instead of taking an explicit closure, it takes an autoclosure by marking its parameter’s type with the @autoclosure attribute. Now you can call the function as if it took a String argument instead of a closure. The argument is automatically converted to a closure, because the customerProvider parameter’s type is marked with the @autoclosure attribute. // customersInLine is ["Ewa", "Barry", "Daniella"] func serve(customer customerProvider: @autoclosure () -> String) { print("Now serving \(customerProvider())!") } serve(customer: customersInLine.remove(at: 0)) // Prints "Now serving Ewa!" Overusing autoclosures can make your code hard to understand. The context and function name should make it clear that evaluation is being deferred. If you want an autoclosure that is allowed to escape, use both the @autoclosure and @escaping attributes. The @escaping attribute is described above in Escaping Closures. // customersInLine is ["Barry", "Daniella"] var customerProviders: [() -> String] = [] func collectCustomerProviders(_ customerProvider: @autoclosure @escaping () -> String) { customerProviders.append(customerProvider) } collectCustomerProviders(customersInLine.remove(at: 0)) collectCustomerProviders(customersInLine.remove(at: 0)) print("Collected \(customerProviders.count) closures.") // Prints "Collected 2 closures." for customerProvider in customerProviders { print("Now serving \(customerProvider())!") } // Prints "Now serving Barry!" // Prints "Now serving Daniella!" In the code above, instead of calling the closure passed to it as its customerProvider argument, the collectCustomerProviders(_:) function appends the closure to the customerProviders array. The array is declared outside the scope of the function, which means the closures in the array can be executed after the function returns. As a result, the value of the customerProvider argument must be allowed to escape the function’s scope. Page structure
{}
# All Questions 275 views 105 views 23 views ### Log-concave distributions: Weighted sum of pdfs Assuming $f_n(\cdot)$ is a log concave function (e.g., pdf of Gaussian distribution) and $0\le q_n\le 1$ for all $n\le N$, I am trying to find conditions under which the following holds ... 365 views ### Bateman-Horn, continued even further As before, consider the "singular series", which shows up in the Bateman-Horn conjecture: for an irreducible polynomial $f,$ this is equal to $$s(f) = \prod_p \frac{1-\frac{n_f(p)}p}{1-\frac1p},$$ ... 36 views ### upper bound for a convex fractional function [on hold] Consider the following convex fractional function $$f\left( {\bf{x}} \right) = \frac{1}{{1- {\bf{x}}}}$$ where ${1- {\bf{x}}} > 0$. Is it possible to obtain a linear or quadratic upper bound ... 96 views ### Odds of residue being small Given $\mathsf{c\geq1}$, what is the probability that if you choose $\mathsf{A,B,\alpha\in\Bbb N}$ such that $\mathsf{A,B<\alpha<AB}$ holds we will have both ... 81 views Let $\Phi$ be the root system of a split group $G$ over a field $k$. The differentials $d\alpha$ of the roots define a polynomial called the discriminant $$\prod_{\alpha\in\Phi}d\alpha$$ on $\mathfrak ... 1answer 58 views ### Is every pair of writable reals one-tape-ITTM-computable? I've been reading this paper, in which authors prove that not all ITTM-computable functions$\Bbb R\rightarrow\Bbb R$are 1-tape-computable, but if we put some restriction on the output of the ... 2answers 227 views ### cup product and Steenrod operations in Serre spectral sequence Let$F\to E\to B$be a fibration with$B$simply-connected. Suppose all differentials in the cohomology Serre spectral sequence (corresponding to the above fibration) are zero maps. Then as a graded ... 1answer 429 views ### Is an irreducible ideal in$R$also irreducible in$R[x]$? Let$R$be a commutative Noetherian ring and$I\subset R$an ideal that is irreducible in the sense that if$I = J_1 \cap J_2$, then$I=J_1$or$I=J_2$. Is (the ideal generated by)$I$irreducible in ... 1answer 78 views ### Projective dimension of a quotient ring Assume$A$and$B$are commutative algebras with$1$,$B = A[z] = A[Z]/(h(Z))$,$Z$an indeterminate. The first comment in this question says that, if$A$is noetherian, then$pd_{B\otimes_A B}(B) ... Is this a right place to ask help for an exercise? Let $n\geq 2$ be an integer and $D=\mathbb Z[1/n]$. Let $A$ be a complete commutative ring with unit for the $I$-adic topology, where $I$ is an ... Brouwer famously proved, using principles motivated by intuitionistic choice sequences, that every function $\mathbb{R}\to \mathbb{R}$ is continuous. In Sheaves in geometry and logic (section VI.9), ...
{}
# Math Help - disprove a convergence question? 1. ## disprove a convergence question? i know that An->1 i need to prove that (An)^n ->1 but when i construct limit lim (An)^n n->+infinity i get 1^(+infinity) which says that there is no limit what do i do in this case in order to disprove that (An)^n->1 ?? 2. It is true that $\left( {A_n = \sqrt[n]{n}} \right) \to 1$. But what about $\left( {\sqrt[n]{n}} \right)^n \to ?$ 3. where did you find the power of 1/n when i do the limit the base goes to 1 and the power goes to + infinity that is not solvable what to do?? they question says prove/disprove how to disprove 4. Originally Posted by transgalactic i know that An->1 i need to prove that (An)^n ->1 but when i construct limit lim (An)^n n->+infinity i get 1^(+infinity) which says that there is no limit what do i do in this case in order to disprove that (An)^n->1 ?? Why not try this method? Your book ought to show that the Root and Ratio test always yield the same result. So $\limsup \sqrt[n]{A_n}=\limsup\frac{A_{n+1}}{A_n}$ Also let us try a three case scenario again Case #1: $A_n\geqslant A_{n+1}\cdots$ It is clear then from the fact that $A_n\to1$ that there exists some $N$ such that [tex]N\leqslant{n}[/math[ implies $A_n\geqslant{1}$ From there it is clear then that $1\leqslant{A_n}\leqslant{A_n^n}$ Or $1\leqslant\sqrt[n]{A_n}\leqslant{A_n}$ Case #2: $A_n\leqslant{A_{n+1}}\cdots$ From here it is clear that there exists a $N$ such that $N\leqslant{n}$ implies $0\leqslant{A_n}\leqslant{1}$ And it should be clear then that $A_n^n\leqslant{A_n}\leqslant{1}$ Or $A_n\leqslant\sqrt[n]{A_n}\leqslant{1}$ Case #3: This is when $A_n=A_{n+1}\cdots$ where the conclusion readily follows. 5. how did you came to the conclusion that my limit equal this? $ \limsup \sqrt[n]{A_n}=\limsup\frac{A_{n+1}}{A_n}$ 6. Originally Posted by transgalactic how did you came to the conclusion that my limit equal this? $ \limsup \sqrt[n]{A_n}=\limsup\frac{A_{n+1}}{A_n}$ Im sorry when I saw Plato's post I mistakenly believed you were asking to prove that $A_n\to1\implies\sqrt[n]{A_n}\to{1}$, forgive me.
{}
# Do H and H form a polar covalent bond? Jan 23, 2018 No. #### Explanation: $H$ and $H$ can form ${H}_{2}$, also known as hydrogen gas. Since both hydrogens have equal electronegativity, the bond is therefore non-polar, and electrons spend equal time around each nucleus. However, since both elements are non-metals, and share electrons in their bonds, it is a covalent bond. So no, it is not a polar covalent bond, but a non-polar covalent bond.
{}
# 7.3 Problem Submission via Condor¶ HTCondor, formerly known as Condor is “a specialized workload management system for compute-intensive jobs.” This tutorial shows how to submit optimization problem to a HTCondor server via OptServer. The idea is very simple: since OptServer executes MOSEK using a simple Python script (solve.py), we can instruct OptServer to use a different script that will interface with HTCondor. To this extent we use the script as in Listing 7.11. Listing 7.11 An example of script to off-load a job from OptServer to a HTCondor server. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 import sys import os,os.path import subprocess if __name__ == '__main__': workdir = sys.argv[1] probfile = sys.argv[2] pidfile = os.path.join(workdir,"PID") with open(pidfile,'wt', encoding='ascii') as f: f.write(str(os.getpid)) r = 1 try: r = subprocess.call(['condor_run', os.path.abspath(os.path.join(os.path.dirname(__file__),"solve.py")), workdir, probfile, '-noPID']) finally: try: os.remove(pidfile) except: pass sys.exit(r) The script operates as follows: • lines 10-11: the job PID is stored in a text file called PID in the working directory; • lines 14-24 : a HTCondor process is created, responsible to run the solve.py script. To tell OptServer to use the script in Listing 7.11 instead of the default solve.py, the cmd option (see Section 9) in the configuration file server.conf must be modified accordingly. In this case the script is available in the script directory of the OptServer distribution. Therefore the configuration file can be simply modified changing the cmd option to "cmd" : "${CONFIGDIR}/script/tocondor.py${TASK}",
{}
To get a trial key fill out the form below ** By clicking this button you agree to our Privacy Policy statement Request our prices --Select currency-- USD EUR * By clicking this button you agree to our Privacy Policy statement Free PVS-Studio license for Microsoft MVP specialists ** By clicking this button you agree to our Privacy Policy statement To get the licence for your open-source project, please fill out this form ** By clicking this button you agree to our Privacy Policy statement I am interested to try it on the platforms: Windows Linux macOS PVS-Studio for .NET Core JetBrains Rider ** By clicking this button you agree to our Privacy Policy statement Message submitted. Your message has been sent. We will email you at check your Spam/Junk folder and click the "Not Spam" button for our message. This way, you won't miss messages from our team in the future. > > How to use the PVS-Studio extension for… Introduction Analyzing projects On Windows On Linux and macOS Continuous use of the analyzer in software development Deploying the analyzer in cloud Continuous Integration services Managing analysis results Analyzer diagnostics General Analysis (C++) General Analysis (C#) General Analysis (Java) Diagnosis of micro-optimizations (C++) Diagnosis of 64-bit errors (Viva64, C++) Customer specific requests (C++) MISRA errors AUTOSAR errors OWASP errors (C++) OWASP errors (C#) OWASP errors (Java) Problems related to code analyzer Contents # How to use the PVS-Studio extension for Qt Creator Dec 05 2022 You can view the PVS-Studio reports in Qt Creator with a specialized extension. In this document, you can find instructions on how to install and configure the extension as well as main use case scenarios. Note. As for now, you cannot run the PVS-Studio analysis from Qt Creator directly. You can analyze the code and get a PVS-Studio report using one of the following ways: ## How to install the PVS-Studio extension First, you need to get the right extension version. You can find it in the PVS-Studio installation directory. • For Windows, the default path is 'C:\Program Files (x86)\PVS-Studio'. • For Linux: '$PREFIX/lib/pvs-studio/plugins', where '$PREFIX' is the installation prefix (often it's '/usr'). ### Supported versions of Qt Creator Due to restrictions of the Qt Creator, you can run there only those plugin versions that were created specifically for this IDE. As of now, the following versions are supported: • Qt Creator 8.0.x The 'x' character replaces any digit. That is, supported version 8.0.x means that the plugin works on versions 8.0.0, 8.0.1, and 8.0.2. The target version of Qt Creator is specified at the end of the plugin file's name. For example: 'pvs-studio-qtcreator-plugin_8.0.x.dll'. ### Installation with Qt Creator Wizard To install the PVS-Studio extension, open Qt Creator and select 'Help' -> 'About Plugins': Click on 'Install Plugin...': In the installation wizard, specify the location of the plugin file as well as the installation location. Depending on the platform, the plugin file will have the '.dll' or '.so' extension. After the extension is installed, restart Qt Creator. ### Manual installation To install the plugin manually, copy the plugin file to the directory with plugins for Qt Creator. Depending on the platform, the file will have the '.dll' or '.so' extension. By default, Qt Creator searches for plugins in the following directories: Windows: %Qt_installation_directory%\Tools\QtCreator\lib\qtcreator\plugins %APPDATA%\Local\QtProject\qtcreator\plugins\%qt_creator_version% Linux: /opt/Qt/Tools/QtCreator/lib/qtcreator/plugins ### Manual installation (option two) You can use this option when it's impossible to install the plugin in standard directories. When starting Qt Creator, you can specify additional directories to search for plugins. Specify these directories using the '-pluginpath' flag. For example, you can run the IDE with the following command: • Windows: qtcreator.exe -pluginpath "%path_to_plugin_directory%" • Linux: qtcreator -pluginpath "\$path_to_plugin_directory" ### Installation troubleshooting When you run Qt Creator, you may see a message: the PVS-Studio plugin cannot be loaded because the suitable dependencies cannot be found (as shown below). To fix it, check the plugin version and the Qt Creator version used. You can find the Qt Creator version by going 'Help' -> 'About Qt Creator'. The plugin version is specified in the name of its file. ## Interface The PVS-Studio plugin for Qt Creator integrates into the menu bar of the IDE and the output pane. ### Integration into the menu bar After the plugin is installed, the PVS-Studio item appears in the 'Analyze' dropdown menu. This item contains the following options: Open/Save allows you to upload and save reports. Recent Reports stores a list of last opened reports for quick navigation. By clicking on this menu item, you can start loading the selected file. Help contains links to the documentation and the most important pages of the analyzer website. Options... opens the Qt Creator settings in the PVS-Studio section. ### Integration into the output pane An additional item named 'PVS-Studio' appears in the Qt Creator output pane. The PVS-Studio window has the following items: 1 — report control bar. The first button allows clearing the current table with warnings. The two other buttons (with arrows) allow you to navigate the table. Please note that these buttons don't navigate table rows, but positions. That is, navigating the table will open the files specified in the warnings. 2 — quick filters bar. Contains buttons for displaying additional menu and advanced filters as well as certainty level checkboxes and buttons with warning categories. When you click on the hamburger button, you'll see the following items: • Open/Save allows you to upload/save the report. • Show False Alarms shows/hides warnings marked as false alarms. The number in parenthesis shows the number of false alarms in the current report. When you activate it, an additional column appears in the warning table. • Options... shows the Qt Creator's settings window with the active PVS-Studio section. • Edit Source Tree Root... allows you to quickly change the Source Tree Root settings. When activated, this item calls a dialog window where you can choose the existing directory. Please note that this option is visible only if the report contains warnings with relative paths to files. You can read more about this setting in the additional functionality section. 3 — the output area. This area solely consists of table with warnings. Further we'll describe this item in more detail. 4 — the pane control elements. The first button allows you to expand the output area by height, the second one allows you to hide the pane. 5 — a button to display the PVS-Studio pane. ### Integration into settings Integration into settings consists of adding a new section named "PVS-Studio" into a list of existing sections. The PVS-Studio settings section is divided into 4 subsections (one tab for each section). For more information about the purpose of each section and the settings included in them, see the "How to configure the plugin" section. ## How to work with analysis results Note: the PVS-Studio extension for Qt Creator supports reports only in the JSON format. Depending on the type of the project analyzed and the way of running the analysis, PVS-Studio can generate report in several formats. To display the report in the extension, you need to convert it into the JSON format. To convert the report, use command-line utilities (PlogConverter.exe for Windows and plog-converter for Linux / macOS). These utilities allow not only to convert PVS-Studio reports into different formats, but also process it. For example, filter warnings. Read more about these utilities here. Example of a command to convert the PVS-Studio report to the JSON format using PlogConverter.exe (Windows): PlogConverter.exe path\to\report.plog -t json ^ -n PVS-Studio Example of a command to convert the PVS-Studio report to the JSON format using plog-converter (Linux and macOS): plog-converter path/to/report/file.json -t json \ -o PVS-Studio.json ### How to upload the PVS-Studio JSON report in Qt Creator To view the PVS-Studio report in Qt Creator, open the PVS-Studio pane, click on the hamburger button, and select 'Open/Save' -> 'Open Analysis Report...': You can also open the report using the menu bar: 'Analyze -> PVS-Studio -> Open/Save -> Open Analysis Report...': After you select and upload the report, you'll get the output area and the warnings displayed in a table: ### How to navigate the report The PVS-Studio result output window is designed to simplify navigation through the project code and code fragments containing potential errors. Double-click on the warning in the table to open the position (a file and a code line) to which the warning was issued. Left-click on the table header to sort the contents by the selected column. Right-click on the table header to open the context menu. Using this menu, you can show/hide additional columns as well as display full paths to files in the position column. The warning table supports multiple selection. To activate it, left-click on and hold it while scrolling through the rows. You can also use keyboard shortcuts: • 'Shift+Click 'or 'Shift+arrows' — multiple selection/cancel • 'Ctrl+Click' – single selection/cancel Note: almost all elements in the plugin have tooltips. To see them, hold the cursor over the item for a few seconds. #### Columns in the report and their purpose Level — the unnamed first column. It displays correspondence between the certainty level and color (importance/certainty in the descending order): red — High level, orange — Medium level, yellow — Low level. Star — shows if the warnings is marked as favorite. Click on a cell in this column to mark the warning as favorite or remove the mark. This is helpful when you noticed an interesting warning and would like to return to it later. ID — shows the warning's order number in a report. This is helpful when you need to sort the report in order in which warnings are received from the analyzer. Code — shows which warnings relate to which diagnostics. Click on this cell to see the documentation on the diagnostic rule. CWE — shows the correspondence between diagnostics and the CWE classification. Click on this cell to open the documentation with the description of this security weakness. SAST — shows the diagnostics' compliance with various safety and security standards (SEI CERT, MISRA, AUTOSAR, etc.). Message — the text of a warning issued by the analyzer. Project — name of a project, the analysis of which resulted in a warning. Position — shows the position (file name and a line number, separated by a colon), to which a warning was issued. If you need to view the full path to the file, right-click on the table header and select 'Show full path to file'. If the analyzer warning contains several positions, (...) appears at the end of it. In this case, when you click on the position column, you'll see a list with all additional positions. FA — shows if the warning is marked as false alarm. Note: some columns may be hidden by default. To display/hide them, right-click on the table header. In the context menu, select 'Show Columns' and then select the desired column. When you right-click on any warning, you'll see the context menu with a list of available additional actions: The 'Mark As' menu contains commands to quickly mark or remove the mark from the selected warnings. As for now, you can mark warnings as favorites or as false alarms. Please note that the contents of this menu change depending on the status of selected warnings. The 'Copy to clipboard' menu allows copying information about selected warnings. Contains several sub-items: • All — copies full information about the warning (a diagnostic number, security classifiers, a full analyzer message, a file name, and a string). Note that CWE and/or SAST ids will be included in the message only if the corresponding columns are displayed; • Message — copies only the warning text; • Path to file — copies the full path to file. 'Hide all %N errors' — allows to hide all warnings related to this diagnostic from the report. When you click on this item, a pop-up window appears to confirm the operation. If you confirm the operation, the analyzer messages will be filtered out instantly. 'Don't check files from' — a submenu containing parts of the path to the position's file. Use this item when you need to hide all warnings issued on files from the selected directory. When you select a value, a pop-up window appears to confirm the operation. This window also contains a tip on how to disable this filter: The 'Analyzed source files' menu contains a list of files, analysis of which resulted in this warning. This menu is helpful when warnings were issued on header files. ### How to filter a report The PVS-Studio filtering mechanisms allow you to quickly find and display diagnostic messages separately or in groups. All filtering mechanisms (quick and advanced filters) listed below can be combined with each other and with sorting simultaneously. For example, you can filter messages by level and diagnostic groups, exclude all messages except for those containing specified text, and then sort them by position. #### Quick filters The quick filters bar contains several buttons that allow you to enable/disable displaying warnings from certain diagnostic groups. When the list of active categories changes, all filters are also re-calculated. Note: the button for the 'Fails' group is displayed only if the report contains errors related to the analyzer (Their 'Code' starts with V0..). You can read a detailed description of the certainty levels and diagnostic groups in the documentation section "Getting acquainted with the PVS-Studio static code analyzer on Windows". You can see advanced filters if you click on 'Quick Filters'. The status of the additional filters bar (shown/hidden) does not affect the active filters. That is, you can hide this bar and filters won't be reset. When you activate it, you'll see an additional bar that contains an input field to filter all table columns. The bar also has a button for quick clearing all filters (Clear All). To activate the filter, press 'Enter' after you enter the text in the input field. Please note that some filters support multiple filtering (for example, Code). When you hover the cursor over the input field, a tooltip with this remainder will appear. ## How to configure the plugin To see the settings for the PVS-Studio plugin for Qt Creator IDE, you can choose the PVS-Studio section in the general list of settings; you can also use the 'Options...' menu items of the plugin. The plugin settings are stored in the 'qtcsettings.json' file which is located in: • Windows: '%APPDATA%\PVS-Studio\qtcsettings.json'; • Linux: '~/.config/PVS-Studio/qtcsettings.json'. All plugin settings are divided into 4 tabs: • General — general plugin settings; • Detectable Errors — configuration of active warnings; • Don't Check Files — filtering warnings by path masks and file names; • Keyboard Message Filtering — filtering warnings by keywords. ### The 'General' tab Display false alarms – enables/disables displaying false positive warnings in the report. If you activate this setting, a new column appears in the report table. Save file after False Alarm mark – if active, then after a False Alarm comment is inserted, a changed file is saved. Source Tree Root – contains a path which should be used to open positions that use relative paths. For example, the '\test\mylist.cpp' relative path is written in the warning, while Source Tree Root contains the 'C:\dev\mylib' path. If the plugin tries to go to the position from the warning, the 'C:\dev\mylib\test\mylist.cpp' file will be opened. For a detailed description of using relative paths in the PVS-Studio report files, see here. Help Language – specifies the preferred language of the documentation. This setting is used to open the documentation on the analyzer website. ### The 'Detectable Errors' tab This tab contains a list and a description of all analyzer warnings. In this tab, you can also enable/disable diagnostic groups or separate diagnostic rules. In the upper part of the window, you can do a full-text search for the description of the diagnostics and their numbers. If you click on the diagnostic's code, a corresponding documentation will open. If you hover the cursor over the text, you'll see a tooltip with a full text of the diagnostic rule. When you click on 'OK' or 'Apply', a warning table is updated to match the current filters. All diagnostics are divided into groups. You can set the following states for them: • Disabled – a category is disabled and all its warnings will not be shown in the warning list. Also, its button will be hidden from the quick filters bar (except for the General category). • Custom – the category is active and the items have different states. • Show All – activates the category and all its child items. • Hide All – deactivates the category and its child items. The category button remains on the quick filters bar. The full list of diagnostics is available on the "PVS-Studio Messages" page. ### The 'Don't Check Files' tab This tab contains lists for filtering warnings by file names or path masks. If the name or path meets at least one mask, it will be hidden from the report. The following wildcard characters are supported: * – any number of any characters ? – any one character To add an entry, click on 'Add' and enter the text in the field that appears. To remove an entry, select it and click on 'Remove'. Entries with empty fields will be deleted automatically. You can edit the existing entries by double-clicking on it or by selecting it and clicking on 'Edit'. When you click on 'OK' or 'Apply', a warning table is updated to match the current filters. ### The 'Keyword Message Filtering' tab The tab contains a keyword editor, warnings with which will be hidden in reports. Keywords from this list are checked only by the data in the 'Message' column. The function can be helpful if you need to hide warnings from a specific function and class — you only need to specify them here.
{}
# Physical and chemical properties of alkene ## Physical properties of alkenes The name alkene or olefin coming from the ethylene was called oil-forming gas or olefiant gas. Alkenes have properties to forms an oily liquid when treated with chlorine or bromine. Alkenes or olefins containing two to four carbon atoms are gases, five to seventeen are liquid and eighteen on words solid at room temperature and they burn in air with a luminous flame. Physical properties of alkenes similar to those of alkanes since the alkenes have only weak Van der Waals attractive forces. ### Hydrogenation of alkenes The heat released for hydrogenation of alkenes to form alkane known as the heat of hydrogenation. CH3CH = CH2 → CH3CH2 – CH3 + ΔH When the heat of hydrogenation value decreases the stability of the alkenes increases because alkane is most stable than the respective alkenes. Thus heat of hydrogenation value compares the stability of the alkenes series. Alkenes or olefins ΔH CH2=CH2 -137.0 kJ mol-1 MeCH=CH2 -125.9 kJ mol-1 MeCH2CH=CH2 -126.8 kJ mol-1 cis-MeCH=CHMe -119.7 kJ mol-1 trans-MeCH=CHMe -119.7 kJ mol-1 Me2C=CH2 -188.8 kJ mol-1 Since the thermodynamic heat of hydrogenation is an exothermic reaction. Thus the numerically smaller value of ∆H, the more stable is the alkene. ### Stability of alkenes hyperconjugation Enthalpy of formation of alkenes not purely addictive properties. Thus the stability of alkenes also depends on steric effects and hyperconjugation. Since three n- butenes all give the n-butane on reduction and the order of stability of these alkenes trans-but-2-ene 〉cis-but-2-ene 〉 but-1-ene Thus this order of stability explained in terms of steric effect and hyperconjugation. In cis-but-2-ene, the two methyl groups in cis isomer being closer together than in the trans isomer. Thus it experiences greater steric repulsion and consequently, cis-isomer has greater strain than the trans-isomer. Thus steric repulsion destabilizes the cis-isomer. trans-but-2-ene 〉cis-but-2-ene On the other hand, hyperconjugation stabilizes the molecule. Among this three hydrocarbon, but-1-ene has less number of hyperconjugation structure. Thus but-1-ene less stable among these three alkenes. Since trans-but-2-ene is the most stable isomer, it follows that hyperconjugation has a greater stabilizing effect then steric repulsion a destabilizing effect. Problem Arrange the following alkenes in order of increasing stability. 1. Me2C=CH2 2. cis-MeCH=CHMe 3. trans-MeCH=CHMe 4. MeCH2CH=CH2 Solution MeCH2CH=CH2 < cis-MeCH=CHMe < trans-MeCH=CHMe < Me2C=CH2 In general, the order of stability of alkenes R2C=CR2>R2C=CHR>R2C=CH2~RCH=CHR>RCH=CH2>CH2=CH2 ### The chemical properties of alkenes Owing to the presence of a double bond, the alkenes undergo a large number of addition reactions but under special conditions, they also undergo substitution reactions. The high reactivity of this chemical bond due to the presence of two π- electrons. Thus when addition reaction occurs, the trigonal arrangement in the alkene changed to the tetrahedral arrangement like methane and a saturated compound produced. ### The combustion reaction of alkenes Alkenes are flammable substances they burn in air with a luminous smoky flame to produce carbon dioxide and water. 2CnH2n + 3H2O → 2nCO2 + 2nH2O + ΔH CH2=CH2 + 3O2 → 2CO2 + 2H2O + ΔH An organic reaction where two or more molecules combine to form the larger one is called addition reaction the product called an additive product. ### Catalytic hydrogenation of alkenes Alkenes readily hydrogenated under pressure in the presence of a catalyst. Finely divided platinum and palladium at room temperature, nickel on between 2000 C and 3000 C, Raney nickel at 2000 C use for this conversation. $CH_{3}CH=CH_{2}+H_{2}\xrightarrow[or\,&space;Ni/\Delta&space;]{Pt\,&space;or\,&space;Pd}CH_{3}CH_{2}CH_{3}$ #### Addition of halogens to alkenes Alkenes react with chlorine or bromine to form addition products. $CH_{2}=CH_{2}+Br_{2}\xrightarrow{CCl_{4}}BrCH_{2}-CH_{2}Br$ Halogen addition can take place either by a heterolytic (polar) or a free-radical mechanism. Halogen addition radially occurs in solution, in the absence of light or peroxides which catalyzed by inorganic halides. Aluminum chloride or by polar surfaces use as a catalyst. These facts lead to the conclusion that reaction occurs by a polar mechanism. But the free radical mechanism has generally accepted that the addition of halogen to alkenes in the absence of light is polar. Stewart showed that the addition of chlorine to ethylene is accelerated by light and this suggested the free radical mechanism. Ethylene adds hydrogen bromide to form ethyl bromide. CH2=CH2 + HBr → CH3⎯CH2Br The order of reactivity of the halogen acids HI>HBr>HCl>HF This is also the order of acid strength. The conditions for the addition are similar to those for halogens, only the addition of hydrogen fluoride occurs under pressure. In the case of unsymmetrical alkenes, it is possible for the addition of the halogen acid to take place in two different ways, Propane might add on hydrogen iodide to form propyl iodide or isopropyl iodide. CH3-CH=CH2 + HI → CH3-CH(H)- CH2(I) CH3-CH=CH2 + HI → CH3-CH(I)-CH2(H) Markovnikov studied many reactions of this kind, and as a result of his work, formulated the following rule. ### Markovnikov rule for alkenes The negative part of the addendum adds on to the carbon atom that is joined to the less number of hydrogen atoms. For halogen acids, the halogen atom is the negative part. Thus according to Markovnikov rule, when propene reacts with halogen acid it forms isopropyl halide. Markovnikov’s rule is empirical but may be explained theoretically on the basis that the addition occurs by a polar mechanism. The addition of halogen acid is an electrophilic reaction thus the proton adding fast, followed by halide ion. Also, the addition predominantly in trans position and this may explain in terms of the formation of a bridge carbonium ion. Since the methyl group has a +I effect, the π electrons are displaced towards the terminal carbon atom which, in consequence, acquires a negative charge. Thus, the proton added on to the carbon fastest from the methyl group, and the halide ion then adds to the carbonium ion. An alternative explanation for Markovnikov’s rule is in terms of the stabilities of carbonium ions. Represent as primary, secondary and tertiary carbonium ion. ### Chemical properties of alkenes Many compounds including alkenes contain one double bond that exists in the two forms which differ in most of their physical and chemical properties. Vant Hoff suggested, if we assume there is no free rotation about a double bond, two structural arrangements are possible for this molecule. In ethylene, we have used SP2 trigonal hybridization to describe the double bond. In this case, two SP2 electrons of ethylene to form one banana bond and the other two-electron to form a second banana bond.
{}
More circuit problems 1. Feb 17, 2006 Firestrider Ok i'm not sure if I did these right, and my teacher isn't exactly sure how it's done either. Here are the problems: 1. http://img153.imageshack.us/img153/7456/circuit10al.jpg [Broken] Find the current that flows through the $$3\Omega$$ resistor. 2. http://img153.imageshack.us/img153/9011/circuit22wq.jpg [Broken] What is the potential of point D relative to point C? This is what I did for #1: I solved the parallel branch into a series equivilent by finding the resistance equivilent of the 3, 4, and 5 $$\Omega$$ resistors and got $$(\frac{1}{3}+\frac{1}{4}+\frac{1}{5})^{-1}=1.27\Omega$$. Since all the resistors were in a series now I took the sum and got $$R_{eq}=1.27+2+ 1=4.27\Omega$$. To find the current throughout I did $$\frac{V}{R_{eq}}=\frac{12}{4.27}=2.8A$$. Now I had to find the current in the $$3\Omega$$ resistor so I think I had to find the voltage at the point before it went into the parallel circuit which came out to be $$V_{3\Omega}=\xi-(V_{d2\Omega}+V_{d1\Omega})=12-((2.8*2)+(2.8*1))=3.6V$$ and then divide that by 3 to get 1.2A For #2: I had the same problem on this one as the last one. Basically on how to find the current in a parallel circuit. I know Kirchhoff's junction rule in that the sum of all the currents going in, must be the sum of all the currents coming out, and I know that the voltage stays the same and current changes in parallel circuits. I've heard that the current is split evenly between each branch, is proportional to each branches resistance, and/or it is calculated by the voltage divided by the resistance at that point. I'm not sure which, if any, to use. My answer, which stays consistant with #1 of 8V is probably not correct since that would mean more voltage than the terminal. But if the split between the two branches it stays consistant with Kirchhoff's rules and not #1 My teacher didn't go over anything about RC circuits, just that the resistance is calculated in the direct opposite manner of resistors. I think RC circuits might be on the AP exam, and my book doesn't have any RC circuit problems. Anyone know where I can find some? Also does anyone know what AP Physics B is in college? Last edited by a moderator: May 2, 2017 2. Feb 17, 2006 Firestrider 3. Feb 18, 2006 Integral Staff Emeritus Your work on the first part is correct. If you repeat the methodology on the 2nd you should get the correct result. As you are aware your answer for the second part is not correct. In the second problem since the resistance of each branch is the same, 6 ohms, the current through each branch will be the same. Remember the voltage drop across a single resistor is determined by the resistance and the current. 4. Feb 18, 2006 ehild Your solution for the first problrem is correct. Just apply the same method for the second one. Both 4 ohm resistors are connected in series, the voltage accross them is known, so you get the current. Multiply it by 4 ohm, you get the voltage across the resistor CD (3 V) The voltage is obtained as potential difference. VDC = UD-UC. You need the potential of D (UD) with respect to UC. The magnitude is 3 V, you have to decide the sign. A is connected to the positive terminal of the battery, so A is more positive than C. The current flows in the direction A->D->C. The current accross a resistor flows along decreasing potential, so D is more positive than D, that is the potential of D with respect to C is 3 V. ehild Sorry, I misread the 2 ohm resistor... so both the voltage and potential are 4 V. Last edited: Feb 18, 2006 5. Feb 18, 2006 phucnv87 http://photo-origin.tickle.com/image/69/7/5/O/69751595O802495255.jpg [Broken] Last edited by a moderator: May 2, 2017
{}
## Ideals of the ring of endomorphisms of a vector space Posted: October 5, 2011 in Noncommutative Ring Theory Notes, Ring of Endomorphisms Tags: , , , , Notation. Throughout this post we will assume that $k$ is a field, $V$ is a $k$-vector space, $E=\text{End}_k(V)$ and $\mathfrak{I} = \{f \in E : \ \text{rank}(f) < \infty \}.$ Obviously $\mathfrak{I}$ is a two-sided ideal of $E.$ If $\dim_k V = n < \infty,$ then $E \cong M_n(k),$ the ring of $n \times n$ matrices with entries in $k,$ and thus $E$ is a simple ring, i.e. the only two-sided ideals of $E$ are the trivial ones: $(0)$ and $E.$ But what if $\dim_k V = \infty ?$  What can we say about the two-sided ideals of $E$ if $\dim_k V = \infty ?$ Theorem 1. If $\dim_k V$ is countably infinite, then $\mathfrak{I}$ is the only non-trivial two-sided ideal of $E.$ Proof. Let $J$ be a two-sided ideal of $E$ and consider two cases. Case 1. $J \not \subseteq \mathfrak{I}.$ So there exists $f \in J$ such that $\text{rank}(f)=\infty.$ Let $\{v_1, v_2, \ldots \}$ be a basis for $V$ and let $W$ be a subspace of $V$ such that $V = \ker f \oplus W.$ Note that $W$ is also countably infinite dimensional because $f(V)=f(W).$ Let $\{w_1,w_2, \ldots \}$ be a basis for $W.$ Since $\ker f \cap W = (0),$ the elements $f(w_1), f(w_2), \ldots$ are $k$-linearly independent and so we can choose $g \in E$ such that $gf(w_i)=v_i,$ for all $i.$ Now let $h \in E$ be such that $h(v_i)=w_i,$ for all $i.$ Then $1_E=gfh \in J$ and so $J=E.$ Case 2. $(0) \neq J \subseteq \mathfrak{I}.$ Choose $0 \neq f \in J$ and suppose that $\text{rank}(f)=n \geq 1.$ Let $\{v_1, \ldots , v_n \}$ be a basis for $f(V)$ and extend it to a basis $\{v_1, \ldots , v_n, \ldots \}$ for $V.$ Since $f \neq 0,$ there exists $s \geq 1$ such that $f(v_s) \neq 0.$ Let $f(v_s) = b_1v_1 + \ldots + b_nv_n$ and fix an $1 \leq r \leq n$ such that $b_r \neq 0.$ Now let $g \in \mathfrak{I}$ and suppose that $\text{rank}(g)=m.$ Let $\{w_1, \ldots , w_m \}$ be a basis for $g(V)$ and for every $i \geq 1$ put $g(v_i)=\sum_{j=1}^m a_{ij}w_j.$ For every $1 \leq j \leq m$ define $\mu_j, \eta_j \in E$ as follows: $\mu_j(v_r)=w_j$ and $\mu_j(v_i)=0$ for all $i \neq r,$ and $\eta_j(v_i)=b_r^{-1}a_{ij}v_s$ for all $i.$ See that $g=\sum_{j=1}^m \mu_j f \eta_j \in J$ and so $J=\mathfrak{I}. \ \Box$ Exercise. It should be easy now to guess what the ideals of $E$ are if $\dim_k V$ is uncountable. Prove your guess! Definition. Let $n \geq 1$ be an integer. A ring with unity $R$ is called $n$-simple if for every $0 \neq a \in R,$ there exist $b_i, c_i \in R$ such that $\sum_{i=1}^n b_iac_i=1.$ Remark 1. Every $n$-simple ring is simple. To see this, let $J \neq (0)$ be a two-sided ideal of $R$ and let $0 \neq a \in J.$ Then, by definition, there exist $b_i,c_i \in R$ such that $\sum_{i=1}^n b_iac_i=1.$ But, since $J$ is a two-sided ideal of $R,$ we have $b_iac_i \in J,$ for all $i,$ and so $1 \in J.$ It is not true however that every simple ring is $n$-simple for some $n \geq 1.$ For example, it can be shown that the first Weyl algebra $A_1(k)$ is not $n$-simple for any $n \geq 1.$ Theorem 2. If $\dim_k V = n < \infty,$ then $E$ is $n$-simple. If $\dim_k V$ is countably infinite, then $E/\mathfrak{I}$ is $1$-simple. Proof. If $\dim_k V = n,$ then $E \cong M_n(k)$ and so we only need to show that $M_n(k)$ is $n$-simple. So let $0 \neq a =[a_{ij}] \in M_n(k)$ and suppose that $\{e_{ij}: \ 1 \leq i.j \leq n \}$ is the standard basis for $M_n(k).$ Since $a \neq 0,$ there exists $1 \leq r,s \leq n$ such that $a_{rs} \neq 0.$ Using $a = \sum_{i,j}a_{ij}e_{ij}$ it is easy to see that $\sum_{i=1}^n a_{rs}^{-1}e_{ir}ae_{si}=1,$ where $1$ on the right-hand side is the identity matrix.  This proves that $E$ is $n$-simple. If $\dim_k V$ is countably infinite, then, as we proved in Theorem 1, for every $f \notin \mathfrak{I}$ there exist $g,h \in E$ such that $gfh=1_E.$ That means $E/\mathfrak{I}$ is $1$-simple. $\Box$ Remark 2. An $n$-simple ring is not necessarily artinian. For example, if $\dim_k V$ is countably infinite, then the ring $E/\mathfrak{I}$ is $1$-simple but not artinian. ## Primitive rings; definition & examples Posted: December 17, 2009 in Noncommutative Ring Theory Notes, Primitive Rings Tags: , , , Let $R$ be a ring and let $M$ be a left $R$-module. Recall the following definitions: 1) $M$ is called faithful if $rM=(0)$ implies $r=0$ for any $r \in R.$ In other words, $M$ is called faithful if $\text{ann}_RM = \{r \in R : \ rM=(0) \}=(0).$ 2) $M$ is called simple if $(0)$ and $M$ are the only left $R$ submodules of $M.$ Faithful and simple right $R$-modules are defined analogously. Definition. A ring $R$ is called left (resp., right) primitive if there exists a left (resp., right) $R$-module $M$ which is both faithful and simple. Remark. We will show later that left and right primitivity are not equivalent.  From now on, I will only consider left primitive rings. If a statement is true for left but not for right, I will mention that. Example 1. If $R$ is a ring and $M$ is a left simple $R$-module, then $R_1=R/\text{ann}_RM$ is a left primitive ring.  This is clear because $M$ would be a faithful simple left $R_1$-module. Example 2. Every simple ring is left primitive. That’s because we can choose a maximal left ideal $\mathbf{m}$ of $R$ and then $M=R/\mathbf{m}$ would be a faithful simple left $R$-module. The reason that $M$ is faithful is that $\text{ann}_R M$ is a two-sided ideal contained in $\mathbf{m}$ and therefore $\text{ann}_R M = (0),$ because $R$ is simple. One special case of this example is $M_n(D),$ the ring of $n \times n$ matrices with entries from a division ring $D.$ If $V$ is an infinite dimensional vector space over a field $F$, then $\text{End}_F V$ is an example of a left primitive ring which is not simple [see Example 4 and this post]. Example 3. If $R$ is a left primitive ring and $0 \neq e \in R$ an idempotent, then $R_1=eRe$ is left primitive: let $M$ be a faithful simple $R$-module. The claim is that $M_1=eM$ is a faithful simple left $R_1$-module. Note that $M_1 \neq (0)$ because $e \neq 0$ and $M$ is faithful. Clearly $M_1$ is a left $R_1$-module.  To see why it’s faithful, let $r_1 = ere \in R_1$ with $r_1M_1=(0).$ Then $(0)=ere^2M=ereM=r_1M.$ So $r_1=0,$ because $M$ is faithful. To prove that $M_1$ is a simple $R_1$-module let $0 \neq x_1 \in M_1.$ We need to show that $R_1x_1=M_1.$ Well, since $x_1 = ex,$ for some $x \in M,$ we have $ex_1=ex=x_1.$ Thus $R_1x_1=eRex_1=eRx_1=eM=M_1.$ Example 4. Let $D$ be a division ring and let $M$ be a right vector space over $D.$ Then $R=\text{End}_D M$ is a left primitive ring. Here is why: $M$ is a left $R$-module if we define $fx = f(x),$ for all $f \in R$ and $x \in M.$ It is clear that $M$ is faithful as a left $R$-module. To see why it is simple, let $x,y \in M$ with $x \neq 0.$ Let $B=\{x_i: \ i \in I \}$ be a basis for $M$ over $D$ such that $x=x_k \in B,$ for some $k \in I.$ Define $f \in R$ by $f(\sum_{i \in I}x_id_i) = yd_k.$ Then $f(x)=f(x_k)=y.$ So, we’ve proved that $Rx = M,$ which shows that $M$ is a simple $R$-module. If $\dim_D M = n < \infty,$ then $R \cong M_n(D),$ which we already showed its primitivity in Example 2. Note that if $M$ was a “left” vector space over $D$ with $\dim_D M = n < \infty,$ then $R$ would be isomorphic to the ring $M_n(D^{op})$ rather than the ring $M_n(D).$
{}
05-146 Teunis C. Dorlas, Philippe A. Martin and Joseph V. Pule Long Cycles in a Perturbed Mean Field Model of a Boson Gas (330K, Postscript) Apr 25, 05 Abstract , Paper (src), View paper (auto. generated pdf), Index of related papers Abstract. In this paper we give a precise mathematical formulation of the relation between Bose condensation and long cycles and prove its validity for the perturbed mean field model of a Bose gas. We decompose the total density $\rho=\rho_{{\rm short}}+\rho_{{\rm long}}$ into the number density of particles belonging to cycles of finite length ($\rho_{{\rm short}}$) and to infinitely long cycles ($\rho_{{\rm long}}$) in the thermodynamic limit. For this model we prove that when there is Bose condensation, $\rho_{{\rm long}}$ is different from zero and identical to the condensate density. This is achieved through an application of the theory of large deviations. We discuss the possible equivalence of $\rho_{{\rm long}}\neq 0$ with off-diagonal long range order and winding paths that occur in the path integral representation of the Bose gas. Files: 05-146.src( 05-146.keywords , Cycles.pdf.mm )
{}
# Prove $3^n = \sum_{0 \leq i \leq j \leq n}$ $n \choose i$ $i \choose j$ How to prove $3^n = \sum_{0 \leq j \leq i \leq n}$ $n \choose i$ $i \choose j$ using $3^n = \sum_{0 \leq i \leq n} 2^i$ $n \choose i$ - $2 = 1+1$. :-) Also, shouldn't it be $j \leq i$? –  WimC Nov 19 '12 at 16:51 @WimC yes typo, thanks –  xiamx Nov 19 '12 at 16:52 $$\sum_{0 \leq j \leq i \leq n} {n \choose i} {i \choose j}=\sum_{0 \le i \le n} {n \choose i} \sum_{0 \le j \le i}{i \choose j}=\sum_{0 \le i \le n} {n \choose i} 2^i=3^n.$$ Count cardinality of $S = \{(A,B):B \subseteq A \subseteq \left\{1,2,\dots,n\right\}\}$ in two different ways: Way 1. Each element of $\{1,2,\dots,n\}$ can either be in $A$ and $B$, only in $A$, or in none of $A$ and $B$, so $|S|=3^n$. Way 2. If $|A|=i$ and $|B|=j$, then there are $n \choose i$ options for $A$ and $i \choose j$ options for $B$, therefore $|S|=\sum_{j \leq i} {n \choose i} {i \choose j}$.
{}
# 2.8. Rydberg Atom Ion Interaction¶ Here we look at the interacton between a single ion and a Rydberg atom. More details regarding the theory can be found in A. Duspayev et al., Phys. Rev. Research 3, 023114 (2021) or M. Deiß et al., Atoms 9, 2 (2021). The inernuclear axis is aligned with the z-direction and the interaction is expanded in multipolar terms. With this choice of coordinate system, the interaction potential is rotationally symmetric around the quantization axis and only states of same $$m_J$$ couple with each other, reducing the number of basis states. We will reproduce here Fig. 2 (a) and (b) from Phys. Rev. Research 3, 023114 (2021), showing the influence of higher multipole order terms ## 2.8.1. Import the Library¶ # We call an IPython magic function to make the output of plotting commands displayed inline. %matplotlib inline # Arrays import numpy as np # Plotting import matplotlib.pyplot as plt # Operating system interfaces import os # pairinteraction :-) from pairinteraction import pireal as pi # Create cache for matrix elements if not os.path.exists("./cache"): os.makedirs("./cache") cache = pi.MatrixElementCache("./cache") ## 2.8.2. Application: Potential Curves¶ The SystemOne class defines the Rydberg atom and the effect of the electric field of the ion can be included into the calculations. The charge of the ion in units of the elementary charge can be passed by SystemOne.setIonCharge(charge) and the distance between the ion and the Rydberg core with SystemOne.setRydIonDistance(distance) in units of micrometer. The orientatin is fixed such that the internuclear axis points along the z-axis. As an example we show how to calculate the energy of the Rydberg state for different internuclear distances and including a different number of multipole orders. Choosing a maximum multipole order of one is equivalent to calculating a StarkMap in a homogenious electric field. # Define the Rydberg state for which the interaction with the ion should be calculated state = pi.StateOne("Rb", 45, 1, 1.5, 0.5) # Setup system, considering states only with similar energy and principle quantum number. # (if a high accuracy is required, the energy and principle quantum number ranges must be increased) LowN = 42 HighN = 48 LowEnergy = -100 HighEnergy = 100 system = pi.SystemOne(state.getSpecies(), cache) system.restrictEnergy(state.getEnergy() + LowEnergy, state.getEnergy() + HighEnergy) system.restrictN(LowN, HighN) # Since the ion and Rydberg atom are placed on the z-axis, the magnetic momentum is conserved system.restrictM(0.5, 0.5) # Define the charge of the ion in units of the elementary charge system.setIonCharge(1) # Define the maximum considered order of the multipole expansion. # 1: monopole-dipole, 2: monopole-quadrupole, ... system.setRydIonOrder(1) # Loop over different distances to the ion array_distances = np.linspace(1.5, 1.1, 100) # um array_eigenvalues = [] array_overlaps = [] for distance in array_distances: # Set the ion Rydberg-atom distance in units of um system.setRydIonDistance(distance) # Diagonalize the system system.diagonalize() # Store the eigenenergies array_eigenvalues.append(system.getHamiltonian().diagonal()) # Store the overlap of the eigenstates with the defined state array_overlaps.append(system.getOverlap(state)) array_eigenvalues = np.ravel(array_eigenvalues) array_overlaps = np.ravel(array_overlaps) array_distances = np.repeat(array_distances, system.getNumBasisvectors()) array_eigenvalues = ( array_eigenvalues / 29.9792458 ) # Convert GHz into inverse cm wavenumber # Plot the interaction potential, the color code visualizes overlap of eigenstates with defined state plt.scatter(array_distances, array_eigenvalues, 8, array_overlaps) plt.xlabel("Distance (um)") plt.ylabel("Energy (1/cm)") plt.colorbar(label=f"Overlap with {state}") plt.title("Up to 1st order in the multipole expansion") plt.grid() plt.ylim(-61.5, -61.2) plt.show() # Calculate the same but including orders up to 6 system.setRydIonOrder(6) # Loop over different distances to the ion array_distances = np.linspace(1.5, 1.1, 100) # um array_eigenvalues = [] array_overlaps = [] for distance in array_distances: # Set the ion Rydberg-atom distance in units of um system.setRydIonDistance(distance) # Diagonalize the system system.diagonalize() # Store the eigenenergies array_eigenvalues.append(system.getHamiltonian().diagonal()) # Store the overlap of the eigenstates with the defined state array_overlaps.append(system.getOverlap(state)) array_eigenvalues = np.ravel(array_eigenvalues) array_overlaps = np.ravel(array_overlaps) array_distances = np.repeat(array_distances, system.getNumBasisvectors()) array_eigenvalues = ( array_eigenvalues / 29.9792458 ) # Convert GHz into inverse cm wavenumber # Plot the interaction potential, the color code visualizes overlap of eigenstates with defined state plt.scatter(array_distances, array_eigenvalues, 8, array_overlaps) plt.xlabel("Distance (um)") plt.ylabel("Energy (1/cm)") plt.colorbar(label=f"Overlap with {state}") plt.title("Up to 6th order in the multipole expansion") plt.grid() plt.ylim(-61.5, -61.2);
{}
# Difference between revisions of "Fast Fourier Transform Library & Arduino" ## Overview ### Introduction As the name suggests the Fast Fourier Transform Library enables for the timely computation of a signal's discrete Fourier transform. Instructions on how to download the latest release can be found here. A fast Fourier transform (fFt) would be of interest to any wishing to take a signal or data set from the time domain to the frequency domain. ## Materials & Prerequisites ### Materials All items one needs to utilize an fFT with an Arduino are: *Computer *Arduino *USB connector ## Process First, let's begin with a discussion of a general Fourier Transform (FT) and then we will address the Fast Fourier Transform (FFT). The Fourier transform is a mathematical function that decomposes a waveform, which is a function of time, into the frequencies that make it up. The result produced by the Fourier transform is a complex valued function of frequency. The absolute value of the Fourier transform represents the frequency value present in the original function and its complex argument represents the phase offset of the basic sinusoidal in that frequency. The Fourier transform is also called a generalization of the Fourier series. This term can also be applied to both the frequency domain representation and the mathematical function used. The Fourier transform helps in extending the Fourier series to non-periodic functions, which allows viewing any function as a sum of simple sinusoids. The definition of which is provided below. Note that we use the discrete time definition of the FT, or a Discrete Fourier Transform (DFT), and not the continuous time definition, the main difference between the two being a summation vs an integral. We make this choice because inputs into computers are discrete data points and not continuous so we can not use an integral, and by extension, we can't use the continuous time definition of the FT. The decomposition of functions into their respective frequencies is a very powerful and useful tool, however, an FT requires massive amounts of computational work. For those of you in computer science, an FT would take O(n^2) time. So although we want to use this tool it is computationally expensive. Or at least this was the case until J.W Cooley and John Tukey came on the scene. They came up with the aptly named Cooley–Tukey FFT algorithm which reduced the time cost on a DFT from O(n^2) to O(nlogn). It recursively breaks down the DFT into smaller DFTs and turns the summation into a dynamic programming problem. So we save on time but we increase our space complexity dramatically, however, due to the continued improvement in transistor technology, modern day computing, for all practical purposes, doesn't care about space complexity until it has to. An example of a field that has to would be computational biology, due to the vast number of different genes in existence that all need to be tested. But that's beside the point, let's continue our discussion of an FFT. The actual algorithm begins by splitting the matrix into two parts, one with all the even indexed elements, the other part with all the odd indexed elements. We continue spliting the matrices down in the same manner until we can perform a DFT on a manageable matrix. Another way to implement the split, rather than by the even-odd index convention, would be through a reversing the bit value of the array entry's index. You make the choice only on the language you program in; so choose the option that takes up less time to implement for the language. Coding it will look like: function cooley_tukey(x) N = length(x) if (N > 2) x_odd = cooley_tukey(x[1:2:N]) x_even = cooley_tukey(x[2:2:N]) else x_odd = x[1] x_even = x[2] end n = 0:N-1 half = div(N,2) factor = exp.(-2im*pi*n/N) return vcat(x_odd .+ x_even .* factor[1:half], x_odd .- x_even .* factor[1:half]) end ## Authors • Jordan Gewirtz • Nish Chakraburtty • Chanel Lynn
{}
# 8. (10 marks) Let Rbe the region that lies above the the cone = Vz? +4, outside the sphere 12 ###### Question: 8. (10 marks) Let Rbe the region that lies above the the cone = Vz? +4, outside the sphere 12 + y? 4 22 =] and inside the sphere _2 +y? +22 = 4 Use spherical coordinates to compute IIf Vz?+y? + zV. #### Similar Solved Questions ##### 12. The functions fx) and g(x) are graphed in the xy-plane. The graph ofy-f(r) is equivalent to the graph of y-g(x) stretched by a factor of 2 in the xx-direction. If g(x)-xex , which of the following correctly defines fx)? fx)-2x e2x B. flx)-Zx ex C. fx)-(x+2) ex+2 D fx)-0.Sx eo.sx 12. The functions fx) and g(x) are graphed in the xy-plane. The graph ofy-f(r) is equivalent to the graph of y-g(x) stretched by a factor of 2 in the xx-direction. If g(x)-xex , which of the following correctly defines fx)? fx)-2x e2x B. flx)-Zx ex C. fx)-(x+2) ex+2 D fx)-0.Sx eo.sx... ##### [-/1 Points]DETAILSSCALC8 3.2.504.XP.MY NOTESASK YOUR TEACHERDoes the function satisly the hypotheses the Mean Value Theorem the given interval? [0, 2] Yes; does not matter f Is continuous dumerentiable every function satistes tne Mean Value TheoremThere not enough Informatlon verify thls function satisfies the Mean Value Theorem:No, fi5 continuous [O, 2] but not differentiable on (0, 2). Yes; / IS continuous on [0, 2] and differentiable on (0_ 2) since polynomials are continuous and differentia [-/1 Points] DETAILS SCALC8 3.2.504.XP. MY NOTES ASK YOUR TEACHER Does the function satisly the hypotheses the Mean Value Theorem the given interval? [0, 2] Yes; does not matter f Is continuous dumerentiable every function satistes tne Mean Value Theorem There not enough Informatlon verify thls func... ##### Tbe graph of f(r) = given belox; Vr+2If you USC numcnc integration determinedt with8 , which of thc following statementstnic?Of all of the nthods We have learedpprUxumation of this integral would b Livenmidpoint method Mecub)L < Ra=c) O L> [" V+z R > Tbe graph of f(r) = given belox; Vr+2 If you USC numcnc integration determine dt with 8 , which of thc following statements tnic? Of all of the nthods We have leared pprUxumation of this integral would b Liven midpoint method Mecu b)L < Ra= c) O L> [" V+z R >... ##### A8p8 04 !or2~GJTCcUgrnc: gph gbowng rale %R tixe erd ccrzejtajon %B titue fu" t: raction Wuztrre frcun ijtial -izjue urt equilikiurn. {tt? G3p18} A8p8 04 !or 2~GJ T CcUgrnc: gph gbowng rale %R tixe erd ccrzejtajon %B titue fu" t: raction Wuztrre frcun ijtial -izjue urt equilikiurn. {tt? G3p18}... ##### (10 pts) List the possible states of total angular momentum (both the total angular momentum, and the possible values of the total angular momentum projection along ah axis) that result from adding the angular Inomentum to- gether of constituents &s described below:5 pts) "spin-polarized" hydrogen atom has the spin of the electron and proton parallel to make a constituent spin-1 object. Now list the possible angular momentum slates Ghal result from adding together the angular momen (10 pts) List the possible states of total angular momentum (both the total angular momentum, and the possible values of the total angular momentum projection along ah axis) that result from adding the angular Inomentum to- gether of constituents &s described below: 5 pts) "spin-polarized&q... ##### When the following reaction is balanced under basic conditions, what is the ratio of the coefficients... When the following reaction is balanced under basic conditions, what is the ratio of the coefficients of Mn(OH)2(s) to MnO4--(aq)? Mn(OH)2(s) + MnO4 (aq) + MnO42-(aq) (A) 3:1 (B) 1:3 (C) 1:4 (D) 1:5 What is the standard reduction potential for the reduction of permanganate ion to managanese dioxide ... ##### Hnos Z,s04 0-N-O+ 2Hs04, 4zot CO_Me HNo COz Me 4fp4 Malngl Dentnat NOz 044b-3-N kolenteatc81,3,4 nitration of thc arututic ring? What thc function ofthe sulfuric acid in the-methyl benzoate? position whcn adding Why dozs thc nitro group altach thc mctbenzyl - chloride (the purpose of this is s0 that Draw the structures ofphenyl chloride nnt piclx Tcler somcthing othcr than the simple benzene how the "benz why benzoate his the structure does) henccEcthyl banzoate can be forcd from the ester Hnos Z,s04 0-N-O+ 2Hs04, 4zot CO_Me HNo COz Me 4fp4 Malngl Dentnat NOz 044b-3-N kolenteatc 81,3,4 nitration of thc arututic ring? What thc function ofthe sulfuric acid in the- methyl benzoate? position whcn adding Why dozs thc nitro group altach thc mct benzyl - chloride (the purpose of this is s0 ... ##### When the length of each edge of a cube is increased by $1 \mathrm{cm},$ the volume is increased by $61 \mathrm{cm}^{3} .$ What is the length of each edge of the original cube? When the length of each edge of a cube is increased by $1 \mathrm{cm},$ the volume is increased by $61 \mathrm{cm}^{3} .$ What is the length of each edge of the original cube?... ##### MetalMetaloid Non-metalWhich= 0ll7e following compounds cannot undergo 542reaction?(Ch3iZCBrChJchZcHArICH3)3CCHZCICHBC CH3zcHZCHZCIAandioUnansw"ered8 SaveWalathepproduct ofthe reaction Metal Metaloid Non-metal Which= 0ll7e following compounds cannot undergo 542reaction? (Ch3iZCBr ChJchZcHAr ICH3)3CCHZCI CHBC CH3zcHZCHZCI Aandio Unansw"ered 8 Save Walathepproduct ofthe reaction... ##### Maximizing Profit A community bird-watching society makes and sells simple bird feeders to raise money for its conservation activities. The materials for each feeder cost $\$ 6$and the society sells an average of 20 per week at a price of$\$10$ each. The society has been considering raising the price, so it conducts a survey and finds that for every dollar increase, it will lose 2 sales per week. (a) Find a function that models weekly profit in terms of price per feeder. (b) What price shou Maximizing Profit A community bird-watching society makes and sells simple bird feeders to raise money for its conservation activities. The materials for each feeder cost $\$ 6$and the society sells an average of 20 per week at a price of$\$10$ each. The society has been considering raising the p... ##### Amy Austin established an insurance agency on March 1 of the current year and completed the... Amy Austin established an insurance agency on March 1 of the current year and completed the following transactions during March Opened a business bank Ut with a deposto SS0,000 change for man shock . Purchased supples on account, $4,000 c ad creditors on COURT,$2,300. d. Received cash from fees ear... ##### How do you find the instantaneous rate of change of the function y= x^2-4x when x=1? How do you find the instantaneous rate of change of the function y= x^2-4x when x=1?... ##### 21. [-/1 Points]DETAILS coordinates to polar coordinates with 7>0 and 0 < 0 < Zn. Convert the rectangular [32,V 32(0, 0) = 21. [-/1 Points] DETAILS coordinates to polar coordinates with 7>0 and 0 < 0 < Zn. Convert the rectangular [32,V 32 (0, 0) =... ##### Suppose that the budget constraint is given as: PxX + PyY = M and the formulation... Suppose that the budget constraint is given as: PxX + PyY = M and the formulation of a utility function is given as: Answer for following questions and show all your calculation/ proof. a. Derive the formula of income-consumption curve and draw its graph. b. Derive the demand function of good X and... ##### Which type of annuity does the monthly payments of a rental agreement form? and With respect... Which type of annuity does the monthly payments of a rental agreement form? and With respect to when the balance will be paid off, does it matter when, in the term, a lump sum payment is made on the principal of a mortgage?... ##### How do you solve 5( z + 4) + 1= 5? How do you solve 5( z + 4) + 1= 5?... ##### Problem 6 10 points Consider the following simple lottery game. There are ten balls total of... Problem 6 10 points Consider the following simple lottery game. There are ten balls total of which five are red and numbered 1,2,3,4,5. The remaining five are blue and also numbered 1,2,3,4,5. Three balls are simultaneously pulled out from a hat, and depending on what they are, you might win a prize... ##### 1. The wave-functions of the states [4) and (0) are given by y(x) and Q(x), respectively.... 1. The wave-functions of the states [4) and (0) are given by y(x) and Q(x), respectively. Derive the expression for the inner product (14) in terms of the wave- functions Q(x) and (x). What is the physical meaning of y(x) and (x)/2? 2. Fig. 1 shows a sketch of y(x). Sketch y(x) such that the states ... ##### What is the Hubble classification of an elliptical galaxy that appears spherical in shape? What is the Hubble classification of an elliptical galaxy that appears spherical in shape?... ##### 2 Points possible: Total attempts: 3To enter exponents use the caret: (SHIFT-6)Use the PREVIEW button t0 see what you ve entered.Enter c'Preview3 Points possible: Total attempts:Sometimes you'II need t0 enter fractional exponent: Be sure tO enter the fraction in parentheses so the grouping done properly:{ should be entered as a (p/q)Enter 65Preview4 Points possible: Total attempts: 3 To enter nth roots like Wz, use root(n)x)EnterPreview5 Points possible: Total attempts:To enter pol 2 Points possible: Total attempts: 3 To enter exponents use the caret: (SHIFT-6) Use the PREVIEW button t0 see what you ve entered. Enter c' Preview 3 Points possible: Total attempts: Sometimes you'II need t0 enter fractional exponent: Be sure tO enter the fraction in parentheses so the ... ##### L. If the following DNA molecule were copied, what would the bases of the second DNA strand be Write them in below .T G G G G A C G T A AM. How is DNA organized starting with DNA molecule alone up until you achieve chromosome? List four levels of DNA organization and what makes their structure L. If the following DNA molecule were copied, what would the bases of the second DNA strand be Write them in below . T G G G G A C G T A A M. How is DNA organized starting with DNA molecule alone up until you achieve chromosome? List four levels of DNA organization and what makes their struct... ##### You invested $9000 between two accounts paying 5% and 6% annual interest. if the total interest earned for the year was$500, how much was invested at each rate? How much was invested at 5% and at 6%? You invested $9000 between two accounts paying 5% and 6% annual interest. if the total interest earned for the year was$500, how much was invested at each rate? How much was invested at 5% and at 6%?... The sales mix percentages for Novotna's Boston and Seattle Divisions are 70% and 30% The contribution margin ratios are: Boston (40%) and Seattie (30). Fixed costs are $2.220,000. What is Novotna's break-even point in dollars? a.$777,000 b. $6,000,0o0 C.$6,342.856 d. $6,727,272 11. 12 A co... 5 answers ##### The solution(s) of the system X-y+z=1 y-2=2 2y - 2z = 1 is (are):X=l,y=l,z=-1The system has no solutionNone of these0 x=3,y=1,z=-1X=2,y=1,z=-1 The solution(s) of the system X-y+z=1 y-2=2 2y - 2z = 1 is (are): X=l,y=l,z=-1 The system has no solution None of these 0 x=3,y=1,z=-1 X=2,y=1,z=-1... 5 answers ##### Graph the following function: P(r) = (r 3)2( - I)?, Show how Vou fouud Llie eud behavior of Label the y-intercept aud the I-intercepts. Graph the following funetion: p(z) ~6 + 5) '(r + 4). Show liow" VOu fouud the end behavior of p Label the V-intercept and the f-intercepts.Graph the following function:p(r) =r-4r? 41 + [6Do this by factoring Show" how" Vou found the end bchavior of p Label the y-intercept ad the p-intercepts_10. Graph the following funetion:pr) =+22 _f+6Do th Graph the following function: P(r) = (r 3)2( - I)?, Show how Vou fouud Llie eud behavior of Label the y-intercept aud the I-intercepts. Graph the following funetion: p(z) ~6 + 5) '(r + 4). Show liow" VOu fouud the end behavior of p Label the V-intercept and the f-intercepts. Graph the foll... 5 answers ##### Their relationships identify Please draw the standard Fisher projections of the two sugars below and as the same molecules, enantiomers, or diastereomers OH CHzOH OHC CH HO ~CHzOH HO_ ~H H= CHO OH their relationships identify Please draw the standard Fisher projections of the two sugars below and as the same molecules, enantiomers, or diastereomers OH CHzOH OHC CH HO ~CHzOH HO_ ~H H= CHO OH... 1 answer ##### Why has nursing made policy and political competence such a strong part of the nursing curriculum... Why has nursing made policy and political competence such a strong part of the nursing curriculum and role development?... 5 answers ##### Problem 2 An orbit of & satellite around planet L an ellipse, with the planet at one focus of thi; ellipse, The distance of the satellite from this planet varies from 300, 000 km to 500, 000 km, attained when the satellite Is at each of the two vertices. Find the equation of this ellipse, If its center at the origin,and the vertices are on the x-axis- Aissume all units are in 100, 000 km:hundred km3 hundred km Problem 2 An orbit of & satellite around planet L an ellipse, with the planet at one focus of thi; ellipse, The distance of the satellite from this planet varies from 300, 000 km to 500, 000 km, attained when the satellite Is at each of the two vertices. Find the equation of this ellipse, If it... 5 answers ##### Two semicircular wires centered at the origin with radii R1 =0.75 m and Rz =0.10 m are joined with straight wire segments to form non-planar loop as shown in the figure. The planes of the semicircles are perpendicular The loop wire has resistance per unit length of 10 snlm. ^ uniform, time-dependent magnetic field B(t) = Bote with Bo = 6.00*10-2 T and a =1.5$ applied= for t 2 0. The magnetic field extends parallel to the y-z Rz plane and makes an angle 0 with the axis as shown in the side view Two semicircular wires centered at the origin with radii R1 =0.75 m and Rz =0.10 m are joined with straight wire segments to form non-planar loop as shown in the figure. The planes of the semicircles are perpendicular The loop wire has resistance per unit length of 10 snlm. ^ uniform, time-dependent...
{}
# zbMATH — the first resource for mathematics Analysis of autonomous Lotka-Volterra competition systems with random perturbation. (English) Zbl 1258.34099 A multi-species Lotka-Volterra competition system with $$n$$ interacting components is considered. Conditions for stability in time average, existence of stationary distribution, as well as extinction, are derived. ##### MSC: 34C60 Qualitative investigation and simulation of ordinary differential equation models 34D05 Asymptotic properties of solutions to ordinary differential equations 34F05 Ordinary differential equations and systems with randomness 92D25 Population dynamics (general) Full Text: ##### References: [1] Bahar, A.; Mao, X.R., Stochastic delay Lotka-Volterra model, J. math. anal. appl., 292, 364-380, (2004) · Zbl 1043.92034 [2] Bao, J.H.; Mao, X.R.; Yin, G.; Yuan, C.G., Competitive Lotka-Volterra population dynamics with jumps, Nonlinear anal., 74, 6601-6616, (2011) · Zbl 1228.93112 [3] Berman, A.; Plemmons, R.J., Nonnegative matrices in the mathematical sciences, (1979), Academic Press New York · Zbl 0484.15016 [4] Chen, L.S.; Chen, J., Nonlinear biological dynamical system, (1993), Science Press Beijing [5] Friedman, H.I., Deterministic mathematical models in population ecology, (1998), Dekker New York [6] Gard, T.C., Introduction to stochastic differential equations, (1988), Dekker New York · Zbl 0682.92018 [7] Goh, B.S., Global stability in many species systems, Amer. nat., 111, 135-143, (1997) [8] Golpalsamy, K., Globally asymptotic stability in a periodic Lotka-Volterra system, J. austral. math. soc. ser. B, 24, 160-170, (1982) [9] Golpalsamy, K., Global asymptotic stability in volterraʼs population systems, J. math. biol., 19, 157-168, (1984) [10] Hasminskii, R.Z., Stochastic stability of differential equations, Monographs and textbooks on mechanics of solids and fluids: mechanics and analysis, vol. 7, (1980), Sijthoff & Noordhoff Alphen aan den Rijn, The Netherlands · Zbl 0419.62037 [11] Higham, D.J., An algorithmic introduction to numerical simulation of stochastic differential equations, SIAM rev., 43, 525-546, (2001) · Zbl 0979.65007 [12] Hu, G.X.; Wang, K., Stability in distribution of competitive Lotka-Volterra system with Markovian switching, Appl. math. model., 35, 3189-3200, (2011) · Zbl 1228.34088 [13] Hu, Y.Z.; Wu, F.K.; Huang, C.M., Stochastic Lotka-Volterra models with multiple delays, J. math. anal. appl., 375, 42-57, (2011) · Zbl 1245.92063 [14] Ikeda Wantanabe, N., Stochastic differential equations and diffusion processes, (1981), North-Holland Amsterdam [15] Ji, C.Y.; Jiang, D.Q.; Shi, N.Z.; OʼRegan, D., Existence, uniqueness, stochastic persistence and global stability of positive solutions of the logistic equation with random perturbation, Math. methods appl. sci., 30, 77-89, (2007) · Zbl 1148.34040 [16] Ji, C.Y.; Jiang, D.Q.; Shi, N.Z., Analysis of a predator-prey model with modified Leslie-gower and Holling-type II schemes with stochastic perturbation, J. math. anal. appl., 359, 482-498, (2009) · Zbl 1190.34064 [17] Jiang, D.Q.; Shi, N.Z.; Zhao, Y.N., Existence, uniqueness, and global stability of positive solutions to the food-limited population model with random perturbation, Math. comput. modelling, 42, 651-658, (2005) · Zbl 1081.92039 [18] Jiang, D.Q.; Shi, N.Z., A note on nonautonomous logistic equation with random perturbation, J. math. anal. appl., 303, 164-172, (2005) · Zbl 1076.34062 [19] Jiang, D.Q.; Zhang, B.X.; Wang, D.H.; Shi, N.Z., Existence, uniqueness and global attractivity of positive solutions and MLE of the parameters to the logistic equation with random perturbation, Sci. China ser. A, 50, 977-986, (2007) · Zbl 1136.34324 [20] Jiang, D.Q.; Shi, N.Z.; Li, X.Y., Global stability and stochastic permanence of a non-autonomous logistic equation with random perturbation, J. math. anal. appl., 340, 588-597, (2008) · Zbl 1140.60032 [21] Klebaner, F.C., Introduction to stochastic calculus with applications, (1998), Imperial College Press · Zbl 0926.60002 [22] Kuang, Y.; Smith, H.L., Global stability for infinite delay Lotka-Volterra type systems, J. differential equations, 103, 221-246, (1993) · Zbl 0786.34077 [23] Langa, José A.; Rodríguez-Bernal, Aníbal; Suárez, Antonio, On the long time behavior of non-autonomous Lotka-Volterra models with diffusion via the sub-supertrajectory method, J. differential equations, 249, 414-445, (2010) · Zbl 1195.35178 [24] Li, M.Y.; Shuai, Z.S., Global-stability problem for coupled systems of differential equations on networks, J. differential equations, 248, 1-20, (2010) · Zbl 1190.34063 [25] Li, X.Y.; Mao, X.R., Population dynamical behavior of non-autonomous Lotka-Volterra competitive system with random perturbation, Discrete contin. dyn. syst., 24, 523-545, (2009) · Zbl 1161.92048 [26] Li, X.Z.; Tong, C.L.; Ji, X.H., The criteria for globally stable equilibrium in N-dimensional Lotka-Volterra systems, J. math. anal. appl., 240, 600-606, (1999) · Zbl 0947.34044 [27] Mao, X.R.; Marion, G.; Renshaw, E., Environmental Brownian noise suppresses explosions in population dynamics, Stochastic process. appl., 97, 95-110, (2002) · Zbl 1058.60046 [28] Mao, X.R.; Marion, G.; Renshaw, E., Asymptotic behaviour of the stochastic Lotka-Volterra model, J. math. anal. appl., 287, 141-156, (2003) · Zbl 1048.92027 [29] May, R.M., Stability and complexity in model ecosystems, (2001), Princeton University Press New Jersey [30] Moon, J.W., Counting labelled tree, (1970), Canadian Mathematical Congress Montreal · Zbl 0214.23204 [31] Murray, J.D., Mathematical biology, (1993), Springer-Verlag Berlin, Heidelberg · Zbl 0779.92001 [32] Polansky, P., Invariant distributions for multi-population models in random environments, Theor. popul. biol., 16, 25-34, (1979) · Zbl 0417.92019 [33] Strang, G., Linear algebra and its applications, (1988), Thomson Learning Inc. [34] West, D.B., Introduction to graph theory, (1996), Prentice Hall Upper Saddle River · Zbl 0845.05001 [35] Xiao, D.M.; Li, W.X., Limit cycles for the competitive three-dimensional Lotka-Volterra systems, J. differential equations, 164, 1-15, (2000) · Zbl 0960.34022 [36] Zhang, L.; Teng, Z.D., N-species non-autonomous Lotka-Volterra competitive systems with delays and impulsive perturbations, Nonlinear anal. real world appl., 12, 3152-3169, (2011) · Zbl 1231.37055 [37] Zhu, C.; Yin, G., Asymptotic properties of hybrid diffusion systems, SIAM J. control optim., 46, 1155-1179, (2007) · Zbl 1140.93045 [38] Zhu, C.; Yin, G., On competitive Lotka-Volterra model in random environments, J. math. anal. appl., 357, 154-170, (2009) · Zbl 1182.34078 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
{}
# MPM2D Sine Law – 2016-01-18 We learned about the Sine Law today. If you have an acute triangle ABC with corresponding opposite sides a, b, and c, then $\frac{sin(A)}{a}= \frac{sin(B)}{b}= \frac{sin(C)}{c}$ Or $\frac{a}{sin(A)}= \frac{b}{sin(B)}= \frac{c}{sin(C)}$ Complete page 402 #1-7 for homework.
{}
Question #1dda7 Oct 17, 2017 230 Explanation: Each bag contains 12 bolts, so the total number of bolts is 12 times the number of bags. Let's say we have $b$ bags, and according to the problem above, all of these bags combined have a total of 2760 bolts. Therefore, we can say that: $12 \times b = 2760$ If we divide both sides of this equation by 12, we solve for $b$: $\frac{12 \times b}{\textcolor{red}{12}} = \frac{2760}{\textcolor{red}{12}}$ $\frac{\cancel{12} \times b}{\cancel{\textcolor{red}{12}}} = 230$ $b = 230$ So the box contained 230 bags.
{}
Breaking a list into multiple columns in Latex Hopefully this is simple: I have a relatively long list where each list item contains very little text. For example: * a * b * c * d * e * f I wish to format it like so: * a * d * b * e * c * f I would rather not create a table with 2 lists as I want to be able to easily change the list without worrying about updating all the columns. What is the best way to do this in latex? - Very helpful, but should be migrated to tex.stackexchange.com –  Matthias Aug 19 '12 at 10:25 Using the multicol package and embedding your list in a multicols environment does what you want: \documentclass{article} \usepackage{multicol} \begin{document} \begin{multicols}{2} \begin{enumerate} \item a \item b \item c \item d \item e \item f \end{enumerate} \end{multicols} \end{document} - Great, thanks. Works like a charm. –  carl Sep 9 '09 at 22:26 If you don't like the numbers that enumerate automatically adds, try itemize instead of enumerate. –  Tim Stewart Sep 6 '10 at 19:52 Or enumitem which allows defining custom enumerate environments - e.g. I have an exenum-environment, so that each list of examples is enumerated the same way. –  moose Apr 4 at 9:49 I don't know if it would work, but maybe you could break the page into columns using the multicol package. \usepackage{multicol} \begin{document}
{}
# Chemistry - Water becomes cold on mixing energy drink ## Solution 1: Well, the solution enthalpy of sugars is positive. I found these numbers on the internet • $$\ce{C12H22O11}$$ (sugar(sucrose)) : 5.4 kJ/mol • $$\ce{C6H12O6}$$ (glucose) : 11 kJ/mol • $$\ce{C6H12O6·H2O}$$ (glucose monohydrate) : 19 kJ/mol So if your "energy drink" is a dry powder (and not a readymade drink in an aluminum can), this could explain your observation. You should however put a thermometer into your experiment, and get us some numbers. As is, my above is just another piece of guesswork. ## Solution 2: From the calculations (and from the enthalpy data posted by Karl, reference here), the following can be calculated: 1. Enthalpy of solution of glucose in the mixture, given by $$\Delta H_\text{glucose} = \frac{52}{100} \times 14.85 \times \frac 1 {180} \times 11000 = \pu{471.9 J}$$ 2. Enthalpy of solution of sucrose in the mixture, given by $$\Delta H_\text{sucrose} = \frac{45}{100} \times 14.85 \times \frac 1 {342} \times 5400 = \pu{105.51 J}$$ By calorimetry, we have $$Q = mc\Delta T = 200 \times 4.2 \times 0.7 = \pu{588 J}$$ Adding the enthalpies of solution should give us a value close to the heat provided by water, and indeed it does! $$\Delta H_\text{glucose} + \Delta H_\text{sucrose} = \begin{array}{|c|} \hline \pu{577 J} \approx \pu{588 J}\\ \hline \end{array}$$
{}
# Chord of a Circle Sub Topics A line segment whose end points lie on a circle is called a chord. In the adjoining figure, each of the line segments PQ, RS, AB and CD is a chord of the circle with centre O. Clearly, an infinite number of chords may be drawn in a circle. ## What is a Chord of a Circle? Two end points on the circumference of a circle joined together is known as chord of a circle. Here the diameter is the greatest chord of the circle. ## Chord of a Circle Formula If ‘ r ’ is the radius of the circle and ‘ a ‘ is the length of the arc, then length of the chord made by the arc is 2r * Sin($\frac{\theta}{2}$) Proof: If ‘ r ’ is the radius of the circle and ‘ $\theta$ ‘ is the angle made by the chord at the centre of the circle and length of the arc ‘ a ‘ Arc AB subtends an angle ‘ $\theta$ ’ at the centre. Where ‘ $\theta$ ‘ is the measure of angle in radians. The rule that involves length of the arc ‘ a ‘ and the central angle ‘ $\theta$ ’ and radius ‘ r ‘ is a = r$\theta$ - - - - - - - - - - - - - - - - - - - (i) AB is the chord of the circle, ‘ O ‘ is the centre of the circle, draw OM perpendicular to AB. M is the midpoint of AB and OM bisects $\angle$ AOB. Therefore, $\angle$ AOM = $\angle$ BOM = $\frac{\theta}{2}$ In triangle AMO, $\angle$ AMO = 900 Sin($\frac{\theta}{2}$) = $\frac{AM}{OA}$ Sin($\frac{\theta}{2}$) = $\frac{AM}{r}$ r * Sin($\frac{\theta}{2}$) = AM Since the length of the chord AB = 2 * AM = 2r * Sin($\frac{\theta}{2}$) Therefore, formula to find the length of the chord =   2r * Sin( $\frac{\theta}{2}$ ) ## Chord Length of a Circle Find the length of the chord of a circle if radius of the circle and central angle made by the chord are given. Proof: If ‘ r ’ is the radius of the circle and ‘ $\theta$ ‘ is the angle made by the chord at the centre of the circle. AB is the chord of the circle, ‘ O ‘ is the centre of the circle, draw OM perpendicular to AB. M is the midpoint of AB and OM bisects $\angle$ AOB. Therefore, $\angle$ AOM = $\angle$ BOM = $\frac{\theta}{2}$ In triangle AMO, $\angle$ AMO = 900 Sin($\frac{\theta}{2}$) = $\frac{AM}{OA}$ Sin($\frac{\theta}{2}$) = $\frac{AM}{r}$ r * Sin($\frac{\theta}{2}$) = AM Since the length of the chord AB = 2 * AM = 2r * Sin($\frac{\theta}{2}$) Therefore, formula to find the length of the chord =   2r * Sin($\frac{\theta}{2}$) ## How to Find the Chord of a Circle If ‘ r ‘ is the radius of the circle and ‘ p ‘ is the length of the perpendicular drawn to the chord from the centre of the circle then the length of the chord is given by 2 Proof: Given, a circle of radius ‘ r ‘ and ‘ p ‘ be the length of the perpendicular drawn from centre ‘O’ on the chord. AB is the chord of the circle, ‘ O ‘ is the centre of the circle, draw OM perpendicular to AB. In triangle AMO, $\angle$ AMO = 900 Using Pythagoras theorem, we get OA2 = AM2 + OM2 r2 = AM2 + p2 r2 - p2 = AM2 AM = 2 Since the length of the chord AB = 2 * AM = 2
{}
auctex-devel [Top][All Lists] ## Re: [AUCTeX-devel] Bug#624735: Displays \it with an italic '\' From: Josh Triplett Subject: Re: [AUCTeX-devel] Bug#624735: Displays \it with an italic '\' Date: Mon, 9 May 2011 23:45:53 -0700 User-agent: Mutt/1.5.21 (2010-09-15) On Mon, May 09, 2011 at 10:15:45PM +0200, Ralf Angeli wrote: > * Josh Triplett (2011-05-09) writes: > > > On Mon, May 09, 2011 at 09:40:39PM +0200, Ralf Angeli wrote: > >> > >> I'd say yes. Fontifying the whole macro looks more consistent to me > >> than fontifying only the part after the backslash. > > > > Doing so makes the '\' look like a '|'. > > I'm using DejaVu Sans Mono with a size of 17 pixels here and the two > characters can be distinguished easily. Maybe the font you are using is > suboptimal? DejaVu Sans Mono here as well. I can distinguish the characters, but I wouldn't say "easily"; it takes staring at the unusual-looking italic '\' more than once. :) > Also, in which context or use case would it be a big problem if the > characters where not easily distinguishable? In my case, I encountered this problem when staring at a complex TeX macro, which ran several commands in a row, along the lines of \foo\it\bar\baz. The italic '\' characters caused by the formatting of \it caused me quite a bit of confusion when trying to decipher it, until I figured out that they really did represent backslashes. It took me some time to parse the \it in the first place, since I didn't see the characteristic '\'. - Josh Triplett
{}
# Kernel subsets of transformations 1. Dec 12, 2012 ### dustbin 1. The problem statement, all variables and given/known data Let $T_1,T_2:ℝ^n\rightarrowℝ^n$ be linear transformations. Show that $\exists S:ℝ^n\rightarrowℝ^n$ s.t. $T_1=S\circ T_2 \Longleftrightarrow kerT_2\subset kerT_1$. 3. The attempt at a solution $(\Longrightarrow)$ Let $S:ℝ^n\rightarrowℝ^n$ be a linear transformation s.t. $T_1 = S\circ T_2$ and let $\vec{v}\in kerT_2$. Then $S(T_2(\vec{v})) = S(\vec{0}) = \vec{0}$ by linearity. Then $T_1(\vec{v}) = \vec{0}$. Thus $\vec{v}\in kerT_1 \quad \forall\vec{v}\in kerT_2$. Therefore $kerT_2 \subset kerT_1$. $(\Longleftarrow)$ Suppose that $kerT_2\subset kerT_1$ and choose $S:ℝ^n\rightarrowℝ^n$ s.t. $S$ is linear and $T_1 = S\circ T_2$. Then for $\vec{v}\in kerT_2,\quad T_1(\vec{v}) = S(T_2(\vec{v}) = S(\vec{0}) = \vec{0}.$ Thus there exists such a transformation. 2. Dec 12, 2012 ### Dick The first part seems ok. For the second, the problem is to show that such an S exists given ker(T2) is contained in ker(T1). Not to assume it exists. Last edited: Dec 12, 2012
{}
# Past events Find information about past ANU MSI events. 21 Mar 2017 ## Operator algebras in rigid C*-tensor categories » David Penneys (Ohio State) 3.30pm 21 March 2017 The notion of an algebra makes sense in any monoidal category C4. 21 Mar 2017 ## $L^p$ estimates for eigenfunctions on manifolds with boundary » Melissa Tacy (ANU) 1.30pm 21 March 2017 In this talk I will discuss the whispering gallery modes from a semiclassical perspective and introduce a method for studying such eigenfunctions semiclassically on general manifolds. 20 Mar 2017 ## Solving Random Differential Equations with HPC and UQ techniques » Tobias Neckel (Technical University of Munich) 4pm 20 March 2017 In the field of scientific computing, random effects become increasingly important to allow for an accurate modelling of realistic effects. 17 Mar 2017 ## G-crossed monoidal categories » Julia Plavnik (Texas A&M) 3.30pm 17 March 2017 Julia will tell us about G-crossed monoidal categories, and gauging symmetries of modular tensor categories. 17 Mar 2017 ## Honours Seminar » Joel Martin and Owen Cameron (MSI/ANU) 1pm 17 March 2017 Coming soon........ 15 Mar 2017 ## K-theory, K- homology, and index theory for orientifolds. » Simon Kitson 2.30pm 15 March 2017 I will discuss my recent work which is based on an extension of equivariant K-theory that allows implementation of the group action by both linear and anti-linear maps. 14 Mar 2017 ## On the classification of modular categories » Julia Plavnik (Texas A&M) 3.30pm 14 March 2017 The idea of the talk is to give an overview about the current situation of the classification program for modular categories. 14 Mar 2017 ## MSRVP Lectures on "Integrability, Duality and Deformed Symmetry" (Lecture 3) » Ctirad Klimcik (University of Marseille, Luminy) 2pm 14 March 2017 Three important concepts of classical and quantum physics - duality, integrability and (deformed) symmetry will be introduced and their interplay discussed. 10 Mar 2017 ## Honours Seminar » Sam Quinn & Kie Seng Nge (MSI/ANU) 1pm 10 March 2017 coming soon..... 08 Mar 2017 ## Sparse grid and its applications » Mr Yuancheng Zhou (MSI/ANU) 2.30pm 8 March 2017 The PhD seminar is a weekly seminar held on Wednesday afternoons, at 2:30-30pm, for PhD students to talk about their thesis work or other projects.
{}
During AMC testing, the AoPS Wiki is in read-only mode. No edits can be made. # Difference between revisions of "2017 AMC 8 Problems/Problem 13" ## Problem Peter, Emma, and Kyler played chess with each other. Peter won 4 games and lost 2 games. Emma won 3 games and lost 3 games. If Kyler lost 3 games, how many games did he win? $\textbf{(A) }0\qquad\textbf{(B) }1\qquad\textbf{(C) }2\qquad\textbf{(D) }3\qquad\textbf{(E) }4$ ## Solution Given $n$ games, there must be a total of $n$ wins and $n$ losses. Hence, $4 + 3 + K = 2 + 3 + 3$ where $K$ is Kyler's wins. $K = 1$, so our final answer is $\boxed{\textbf{(B)}\ 1}.$
{}
GR 8677927796770177 | # Login | Register GR9277 #14 Problem GREPhysics.NET Official Solution Alternate Solutions \prob{14} The total energy of a blackbody radiation source is collected for one minute and used to heat water. The temperature of the water increases from 20.0 degrees Celsius to 20.5 degrees Celsius. If the absolute temperature of the blackbody were doubled and the experiment repeated, which of the following statements would be most nearly correct? 1. The temperature of the water would increase from 20 degrees Celsius to a final temperature of 21 degrees Celsius. 2. The temperature of the water would increase from 20 degrees Celsius to a final temperature of 24 degrees Celsius. 3. The temperature of the water would increase from 20 degrees Celsius to a final temperature of 28 degrees Celsius. 4. The temperature of the water would increase from 20 degrees Celsius to a final temperature of 36 degrees Celsius. 5. The water would boil within the one-minute time period Statistical Mechanics$\Rightarrow$}Blackbody Radiation Formula Recall $P=ut\propto T^4, $ where $P$ is the power and $u$ the energy and $T$ the temperature. So, initially, the blackbody radiation emits $P_1=kT^4$. When its temperature is doubled, it emits $P_2=k(2T)^4=16kT^4$. Recall that water heats according to $Q=mc\Delta T= \kappa \Delta T$. So, initially, the heat gain in the water is $Q_1=\kappa (0.5^\circ)$. Finally, $Q_2=\kappa x$, where $x$ is the unknown change in temperature. Conservation of energy in each step requires that $kT^4t=\kappa/2$ and $16kT^4t=\kappa x$, i.e., that $P_i t = Q_i$. Divide the two to get $\frac{1}{16}=\frac{2}{x}\Rightarrow x=\Delta T = 8^\circ$. Assuming the experiment is repeated from the same initial temperature, this would bring the initial $20^\circ$ to $28^\circ$, as in choice (C). Alternate Solutions There are no Alternate Solutions for this problem. Be the first to post one! mpdude8 2012-04-19 18:15:51 Whenever I see the word "blackbody", I think T^4. It seems like they always ask a question to see if you know the correct exponent in the blackbody-temperature relationship. istezamer 2009-11-06 07:32:28 Of course we must start by knowing the fundamental equation that the Energy is proportional to Temperature^4.. so a double in temperature would increase the energy 16 folds!! Now if we take the initial energy to be one unit... 1 unit increase the temperature 0.5 degrees 16 unit would increase the temperature 16 times 0.5 which is 8 degrees!! SO the correct choice is (C). pam d2011-09-28 09:56:39 Careful with the terminology, you should say that "radiative power" is proportional to temperature raised to the fourth power. Since the amount of time the experiment is run does not change, we have in this specific instance that the amount of energy transferred is proportional to temperature to the fourth power. spacebabe47 2006-10-31 19:38:13 Dividing the two equations will actually get 1/16=.5/x 1/16=1/2x x=8 Andresito 2006-03-24 19:20:59 In the expression P = u*t Power = energy/time Thus, P = u/t and this is consistent with Q = P*t LaTeX syntax supported through dollar sign wrappers $, ex.,$\alpha^2_0$produces $\alpha^2_0$. type this... to get...$\int_0^\infty$$\int_0^\infty$$\partial$$\partial$$\Rightarrow$$\Rightarrow$$\ddot{x},\dot{x}$$\ddot{x},\dot{x}$$\sqrt{z}$$\sqrt{z}$$\langle my \rangle$$\langle my \rangle$$\left( abacadabra \right)_{me}$$\left( abacadabra \right)_{me}$$\vec{E}$$\vec{E}$$\frac{a}{b}\$ $\frac{a}{b}$
{}
## Measuring the Impact of Influence on Individuals: Roadmap to Quantifying Attitude ##### Date 2020-10-26T03:21:29Z ##### Authors Fu, Xiaoyun Kumar, Raj Gaurav Basu, Samik Pavan, A. ##### Organizational Units Sociology Organizational Unit Computer Science Organizational Unit ##### Department SociologyComputer Science ##### Abstract Influence diffusion has been central to the study of propagation of information in social networks, where influence is typically modeled as a binary property of entities: influenced or not influenced. We introduce the notion of attitude, which, as described in social psychology, is the degree by which an entity is influenced by the information. We present an information diffusion model that quantifies the degree of influence, i.e., attitude of individuals, in a social network. With this model, we formulate and study attitude maximization problem. We prove that the function for computing attitude is monotonic and sub-modular, and the attitude maximization problem is NP-Hard. We present a greedy algorithm for maximization with an approximation guarantee of $(1-1/e)$. Using the same model, we also introduce the notion of "actionable" attitude with the aim to study the scenarios where attaining individuals with high attitude is objectively more important than maximizing the attitude of the entire network. We show that the function for computing actionable attitude, unlike that for computing attitude, is non-submodular and however is \emph{approximately submodular}. We present approximation algorithm for maximizing actionable attitude in a network. We experimentally evaluated our algorithms and study empirical properties of the attitude of nodes in network such as spatial and value distribution of high attitude nodes.
{}
# Rules for whether an $n$ degree polynomial is an $n$ degree power Given an $n$ degree equation in 2 variables ($n$ is a natural number) $$a_0x^n+a_1x^{n-1}+a_2x^{n-2}+\cdots+a_{n-1}x+a_n=y^n$$ If all values of $a$ are given rational numbers, are there any known minimum or sufficient conditions for $x$ and $y$ to have: 1. Real number 2. Rational number 3. Integer solutions and how many of them would exist. If it is not known/possible (or too hard) for an $n$ degree polynomial, do such conditions exist for quadratic ($n=2$) and cubic ($n=3$) polynomials. • For every positive value of polynomial there exist a real solution to $y$ Dec 14, 2014 at 13:25 • Why has someone given a close vote on this? I'm just looking for general rules, such as when is the solution an integer and so forth, not a specific answer. Dec 14, 2014 at 17:40 • @user45195: Perhaps it is better if the question is edited so the $a_i$ and $x,y$ are just in the rationals. Dec 18, 2014 at 1:00 • Yes, I've edited it. Dec 21, 2014 at 18:14 This addresses user45195's question and is too long for a comment. When I said too broad, it was because the question originally didn't limit the field of $x$. A familiar field is the complex numbers $\mathbb{C}=a+bi$, of which a special case are the reals $\mathbb{R}$, and even more limited, the rationals $\mathbb{Q}$. If $x$ was in $\mathbb{C}$, then it's just an old result (Fundamental Theorem of Algebra), that for any $y$, one can always find $n$ roots $x$ that solve the equation in your post, and there's nothing new to be said. However, if $x,y$ are limited to the rationals $\mathbb{Q}$, that's where it gets interesting. The equation, $$f(x) = y^2\tag1$$ where the degree $d$ of $f(x)$ is either $d = 2,3,4,5$ has been extensively studied. See algebraic curve, including Pell equations ($d=2$), elliptic curve ($d=3,4$), and hyperelliptic curve ($d=5$). For, $$f(x) = y^3\tag2$$ then $d=3$ still has special cases as elliptic curves. For $d=4$, see trigonal curve. However, for the higher, $$f(x) = y^m\tag3$$ where $d,m>3$, is more complicated. See superelliptic curve. • So, for say a 10 degree polynomial, brute force is the only way to find solutions? Dec 21, 2014 at 14:17 • @user45195: Well, the most obvious way. But for lower-deg polys there are alternative methods and also, given an initial rational point, generally an infinite more can be found systematically (not brute). Dec 21, 2014 at 16:26 • Thank you, I was asking from a computational point of view. Dec 21, 2014 at 18:14 (The OP suggested a connection to this post.) Because this question is too broad, it spreads thin and may be vague. I suggest it be limited so coefficients $a_i$ are rational, and $x,y$ are also rational. Having said that, two nice results are discussed in Kevin Brown's website. I. Deg 2: The sum of $24$ consecutive squares. $$F(x) = x^2+(x+1)^2+(x+2)^2+\dots+(x+23)^2=y^2\tag1$$ $$F(x) = 24x^2+552x+4324=y^2$$ which has solution, $$x=p^2+70pq+144q^2,\quad\quad y =10(7p^2+30pq+42q^2)$$ where $p,q$ solve the Pell equation $p^2-6q^2=1$. This has an infinite number of integer solutions with the case $p,q = 1,0$ yielding the famous cannonball stacking problem, $$1^2+2^2+3^2+\dots+24^2 =70^2$$ II. Deg 3: The sum of $n$ consecutive cubes. $$G(x) = x^3+(x+1)^3+(x+2)^3+\dots+(x+n-1)^3=y^3\tag2$$ $$G(x) = n x^3 + \tfrac{1}{4}n(n - 1)\big(6x^2 + 4 n x - 2x + n(n - 1)\big) =y^3$$ a solution of which (by Dave Rusin) is, $$x=\tfrac{1}{6}(v^4 - 3v^3 - 2v^2 + 4),\quad\quad n=v^3$$ hence for $2^3=8$ and $4^3=64$ cubes, $$(-2)^3+(-1)^3+\dots+3^3+4^3+5^3 = 6^3$$ $$6^3+7^3+8^3+9^3+10^3+\dots+69^3 = 180^3$$ and so on. III. Deg 4: The sum of 4th powers in arithmetic progression. No analogous results known so far. See linked post in first line. • Thank you, but I never mentioned that terms are in an AP. Dec 18, 2014 at 9:26 • @user45195: Yes, I had to make assumptions on your $x,y$ because the question is too broad. One can start with special cases, and may then get interesting results. By the way, what field is your $x$ supposed to be in? (I think that omission is the reason for someone's down-vote.) Dec 18, 2014 at 17:03 • a) If it is too broad, does that mean there exists no general solution to solve such equations (like the way roots can directly be found for quadratic equations, etc.)? b) I'll edit the question to include only rational coeffecients. c) What do you mean by field? Dec 19, 2014 at 11:08 You can determine whether a real solution to that equation exists; obviously, there is always one if $n$ is odd, since the left side, $y^n$ can be chosen, independent of the right side, to equal whatever we want. For even $n$, we want to know whether the left hand polynomial is ever positive - this clearly holds if $a_0$ is positive, since the function would tend to $\infty$ as $|x|$ gets large. However, if $a_0$ is negative, then the left hand function will be non-negative somewhere if and only if it has at least one real root. Thus, the problem would then collapse to determining if a univariate polynomial has a root and, in general, this is possible to accomplish computationally via Sturm sequences.
{}
# 7.3.4 - Search for molecular mechanism of cell enlargement It is pretty obvious from this behaviour that simply putting water around a tissue creates a system with several interacting factors. The growth rate could be affected not only by $P$ but also by $m$, $L$, $\Pi$, or $P_{th}$. In an effort to simplify this system, scientists have sought single cells that could be surrounded by water. From a practical point of view this allows $P$ and $m$ to be the main factors controlling growth and minimises the effect of $L$, as shown in Figure 7.27. ## 7.3-Ch-Fig-7.27.jpg Figure 7.27 Simplified version of Fig. 5 for a single cell surrounded by water ($\Psi_o = 0$). $L$ is so large that $P$ and $m$ control the growth rate if $\Pi$ and $P_{th}$ are constant. (Diagram courtesy JS Boyer) One approach has been to use single algal cells large enough to measure $P$ and growth ($\frac{dV}{dt}$) simultaneously. Chara corallina or Nitella flexilis are candidates because they have cells large enough for the measurements. They are naturally surrounded by fresh or brackish water and have rhizoids resembling roots. Gametes form in structures in the axils of branches analogous to flowers or cones in their land counterparts. In fact, genomic and morphological analyses consider these algae to be among the closest relatives of the progenitors of land plants. In the internode cells of Chara or Nitella, microfibrils are oriented normal to the cell axis and the walls expand mostly in length like many plant tissues (roots, stems, grass leaves). Using the internodes of these species, $P$ and growth rate ($\frac{dV}{dt}$) can be monitored simultaneously and changed so quickly that $\Pi$ and $P_{th}$ remain constant. This allows the $P$ response to be rigorously determined. Moreover, the walls can be isolated without leaving the medium in which the algae are grown. The same measurements can be repeated without the cytoplasm. This is a great tool for observing the response to $P$. When the growth of the live cells was compared with that in the isolated walls, they were similar but only for the first hour or so. After that, growth ceased in the walls but continued in the live cells even though the walls and cells had the same $P$. Something was missing in the isolated wall that was being supplied by the live cells. Considering that new wall material is supplied by the cytoplasm and missing in the isolated walls, if seemed reasonable to supply new wall constituents as though the cytoplasm had done so. Supplying pectin (a wall polymer) to the growth medium returned the growth rate of isolated walls to the rate in the live cells! This was unexpected but indicated the wall needed a supply of pectin in order to continue growing. The active pectin was a linear unbranched polymer of α-1,4-D-galacturonic acid sometimes with a small amount of rhamnose (usually 1-2%) that is normally synthesised in the cytoplasm and released to the wall by exocytosis. It becomes a prominent member of the wall matrix and forms a gel embedding the cellulose microfibrils. The pectin gels because calcium ions bind to neighbouring pectin polymers. The cross-bridging forms junction zones with the polymers that are strong enough for the pectin to form a gel solid. The gel gets stronger with more cross-bridges. The new pectin from the cytoplasm removed some of the cross-bridges from the wall, weakening the wall gel and allowing the polymers to slip a little. This action occurred only when $P$ was above $P_{th}$, and only when temperatures were warm enough for growth. Figure 7.28 compares the turgor pressure and temperature responses of the live algal cells with those of land plants tissues. ## 7.3-Ch-Fig-7.28.jpg Figure 7.28 Turgor pressure and temperature responses of single algal cells compared to land plant tissues. Turgor pressure and growth in (A) Chara internode cell and (B) sunflower (Helianthus annuus) leaves at 25 °C. Temperature and growth in (C) Chara internode cell and (D) soybean hypocotyls at turgor pressure of 0.4 to 0.5 MPa. (JS Boyer, Funct Plant Biol 36: 385-394) A chemical mechanism has been proposed to account for this behaviour and is called a “calcium pectate cycle” (Figure 7.29). It seems possible that the chemistry might also occur in land plants. As long as pectin, calcium, and sufficient turgor pressure are present, the cycle should occur in pectin-containing walls. Pectins are among the most conserved components of cell walls during plant evolution, and the similarity in pressure and temperature response in these algae and land plants suggests a common mechanism in both. However, despite these intriguing similarities, definitive tests remain for the future. ## 7.3-Ch-Fig-7.29.jpg Figure 7.29 Proposed mechanism of cell enlargement in Chara. The diagram shows the calcium pectate cycle occurring in the cell wall for two calcium pectate cross-bridges (black ovals in anti-parallel pectate molecules, left side of figure). Turgor pressure is high enough to distort the egg-box in one of the pair, weakening its bonds with calcium (left pectate in pair). New pectate from the cytoplasm (dashed red arrow) is undistorted and preferentially removes calcium from the weakened and distorted pectate (step 1, red). The load-bearing pectate relaxes after its cross-bridging calcium is removed. The wall elongates incrementally, shifting the load to the other member of the pectate pair, which distorts. The remaining steps 2 to 4 follow by depositing calcium pectate (step 2, blue) and new calcium from the medium plus new pectate from the cytoplasm (step 3, green), resulting in a cycle (step 4, black). The net result is elongation plus wall deposition. Although shown for only two cross-bridged pectate molecules, the same principles apply to larger numbers of cross-bridges. Note that in Chara the cycle occurs in the medium in which the cells are grown (0.6 mM Ca2+). Also note that the rate of growth depends on the rate of pectate release from cytoplasm to wall by exocytosis (red and green dashed arrows). Each step in the diagram was demonstrated experimentally in Chara. (JS Boyer, Front Plant Sci 7:866, 2016) ### Expansins In other experiments a class of cell wall proteins, expansins, are proposed as potential agents for catalysing yielding in vivo (McQueen-Mason 1995). Figure 7.30 shows sharp gradients in growth along the hook of a cucumber hypocotyl that are paralleled by a gradient in extension of these tissues when stretched under acid conditions (Figure 7.30b) but not at neutral pH (Figure 7.30c). When tissues were killed by boiling, extension was blocked (Figure 7.30d). From these results, it seems that hypocotyl extension requires acid pH and non-denatured proteins. However, all the experiments were done in an extensometer with a uni-axial pull substantially less than the multi-axial tension exerted by $P$. Since growth requires $P$ above a threshold in order for walls to yield (Eq. 7), it is difficult to interpret this proposal. When greater uni-axial pull was used by Ezaki et al. (2005) to study hypocotyl growth in soybean hypocotyls, pectin chemistry appeared to determine growth rate. In fact, Zhao et al. (2008) give evidence that pectins may be the target of expansin action. ## 7.3-Ch-Fig-7.30.png Figure 7.30 Distribution of growth and wall extension at four positions along a cucumber hypocotyl. (a) Growth rate is most rapid near the hook. (b) Hypocotyl segments were frozen, thawed, abraded and stretched under a 20 g load in an acidic buffer (pH 4.5), which is much less tension than exerted by $P$. Most rapid extension occurred in the fast-growing hook. (c) When measured at pH 6.8, segments extended very little. (d) Segments in which enzymes were denatured by boiling did not extend under the load. (SJ McQueen-Mason, J Exp Bot 46: 1639-1650, 1995) Notice that the growth mechanism in Figure 7.29 is entirely chemical, with no role for enzymes. It is difficult to hypothesize an enzymatic mechanism because of the requirement for $P$ above a threshold. Enzyme activity is generally unaffected by these pressures and would continue acting regardless of $P$. What is clear is that the biophysical consequences of instantaneous changes in $P$ will be followed by a phalanx of biochemical events including wall polymer synthesis and altered gene expression, and rigorous methods will be required to distinguish enzymatic from biophysical hypotheses. For instance, sustained expansion of plant cell walls cannot be explained simply by inexorable wall hydrolysis; if it were, cell walls would weaken to breaking point during growth. The ‘setting’ of long-term cell expansion rates is likely to hinge on biochemical and chemical events underlying wall relaxation and reinforcement. ### Cessation of cell wall expansion Molecular events leading to cessation of wall expansion are even less well understood than those which initiate growth. For example, part of the growing region stops growing when water deficits occur around maize roots (Figure 7.31). ## 7.3-Ch-Fig-7.31.png Figure 7.31 Spatial distribution of (a) elongation rates and (b) turgor pressures along apical zones of maize roots grown either in hydrated (Ψ = -0.02 MPa; filled circles) or rather dry ($\Psi$ = -1.6 MPa; open circles) vermiculite. Note that water deficit only depressed growth at positions more than 2 mm from the apex but $P$ was lower at all positions in water deficient roots. (WG Spollen and RE Sharp, Plant Physiol 19: 565-576, 1991) Clearly, the region farthest from the tip has stopped growing but $P$ remains uniform throughout the zone. $P$ is lower in the water deficient roots presumably because less water can be absorbed from the water deficient soil. A common view is that sufficient cross-linking develops to limit the extension of the matrix around cellulose microfibrils and prevent further wall expansion. Essentially, when a cell has reached its final dimensions its wall is ‘locked’ into a final, hardened conformation. From the description above, molecules with a specific role in growth cessation are thought to be exocytosed into cell walls, providing either substrates for cross-linkage reactions or enzymes catalysing cross-linkage of pre-existing wall polymers. Identification of cross-linkage reactions have led to a search for their presence in vivo. For example, ferulic acid residues in grass cell walls can cross-link to produce di-ferulic acid and potentially stiffen walls through formation of a polysaccharide-lignin network. Unfortunately, in rice coleoptiles the abundance of the di-ferulic form bore no relation to growth cessation. Also this form of stiffening might be difficult to distinguish from secondary wall deposition. Secondary cell walls generally form after primary walls have ceased to grow but the familiar rigidity of secondary cell walls (e.g. wood) is mostly viewed as distinct from stiffening of primary walls. Lignification of primary walls commences earlier than once thought and is a possible factor in growth cessation (Müsel et al. 1997). Such a response might be controlled through release of peroxide into walls in much the same way as seen in walls subject to fungal attack. Peroxidase enzymes are candidates for the catalysis of these reactions. Understanding rigidification of this complex matrix of polymers demands input from the disciplines of biology, chemistry and physics. Combining established techniques with novel approaches to the study of individual cells (e.g. Fourier-Transform Infra-red microspectroscopy and the cell pressure probe) will bring new insights to the molecular basis of wall expansion.
{}
# Convert watt to hp Swap conversion: hp to watt ## Facts and Curiosities The Watt equals one joule per second. For example, if a 60W lamp remains on for one hour, the power consumed is 60 watts-hour. This would be the same amount of energy that would make a 120-watt bulb shine for half an hour. To get an idea of how much Watt represents, here are some reference values of the power of some household appliances: stereo, 200 watts; vacuum cleaner, 1000 watts; computer, 400 watts; exhaust fan, 300 watts; electric oven, 5000 watts; hair dryer, 1300 watts; LCD television, 150 watts. The name of this power unit is a tribute to the British engineer James Watt, for the recognition of the many researches that he made, to better understand the energy, as well as, the way in which he created machines that obtained a better use of it. The Horsepower came about when, for the first time in history, people had something like power to use other than animal traction. This "something" was a steam engine and James Watt pioneered the construction of these steam engines. His target audience was precisely the people who used horses to do the heavy work, which would then be done by machines. Sales did not go well at first, so to help sell the machine it was necessary to equate the work developed by the machine with the work done by a horse. That is why the horsepower measurement unit was created. To be more precise in assessing this unit of measure, James Watt has carried out various experiments in order to measure the work performed by a horse for a certain period of time. From there he just needed to see how long it took his steam engine to do the same job. ## Math Formula hp=W/745.69987 ## Conversion Examples watt to hp watt to hp 10 W = 0,0134 hp 2000 W = 2,6820 hp 15 W = 0,0201 hp 2500 W = 3,3526 hp 20 W = 0,0268 hp 5000 W = 6,7051 hp 25 W = 0,0335 hp 7500 W = 10,0577 hp 50 W = 0,0671 hp 10000 W = 13,4102 hp 100 W = 0,1341 hp 12500 W = 16,7628 hp 200 W = 0,2682 hp 15000 W = 20,1153 hp 250 W = 0,3353 hp 17500 W = 23,4679 hp 500 W = 0,6705 hp 20000 W = 26,8204 hp 1000 W = 1,3410 hp 25000 W = 33,5256 hp Do you like? Please Share! ## Send a Comment While great efforts have been made to ensure the accuracy of our conversion formulas, as well as all other information that is presented on our website, we can not give any guarantee or be held responsible for any errors that may have been made by our automatic calculators. As such, we urge our users to immediately contact with us if they find any error in the conversions made. Of course, we will try to correct any reported anomalies as quickly as possible! Thank you.
{}
# On the arXivotubes If I didn’t have to be modifying and debugging a network-growth simulation today, I’d be reading these papers. The first is not too far from one subject I’m researching right now. Axelsen et al. (arXiv:0711.2208) write about “a tool-based view of regulatory network topology”: The relationship between the regulatory design and the functionality of molecular networks is a key issue in biology. Modules and motifs have been associated to various cellular processes, thereby providing anecdotal evidence for performance based localization on molecular networks. To quantify structure-function relationship we investigate similarities of proteins which are close in the regulatory network of the yeast Saccharomyces cerevisiae. We find that the topology of the regulatory network show weak remnants of its history of network reorganizations, but strong features of co-regulated proteins associated to similar tasks. This suggests that local topological features of regulatory networks, including broad degree distributions, emerge as an implicit result of matching a number of needed processes to a finite toolbox of proteins. The second, Gaume and Forterre’s “A viscoelastic deadly fluid in carnivorous pitcher plants” (arXiv:0711.4724, also in PLoS ONE), is pertinent to my eventual life goal of building an army of atomic super-plants. # Lent of Physics Blogging For a while, we had a blog carnival of physics writing, Philosophia Naturalis. However, it looks rather moribund today: the last installment to date was on 4 October (at Dynamics of Cats), and the “next available hosting opportunity” was the first of November, which is already almost a month gone. Combined with the recent description of the physics blogoweb as “an intellectual wasteland,” and we’ve got plenty of excuses to feel a little depressed. Oh, and have you noticed that ScienceBlogs.com still can’t do math notation? So much for discussing science the way that, you know, actual scientists do. So much for reflecting the increasing quantitative aspect of the life sciences, discussing interdisciplinary work, or doing anything beyond the same old carping over innumeracy. Maybe they’re intimidated by that old “each equation cuts the readership in half” bromide. Or maybe they think that allowing the use of calculus and other such scary mathematics would be “bad framing.” Continue reading Lent of Physics Blogging # Vericon It looks like both Lois Lowry and Randall Munroe will be at Vericon 2008, this next January. You know, for a guy who spent most of his teenage years either reading or writing science fiction, I sure don’t know much about this “fandom” thing. I mean, I have some impressive stories about the Trek convention in Huntsville eleven years ago, but I seem to have missed the whole game ever since then. # Sagan/Smith Mashup OK, the Silliness Department has not yet clocked out. For this piece, the audio comes from episode 9 of Cosmos: A Personal Voyage (1980). # ERV Ruins My Evening This is, I suspect, the logical limit of the Snakes On A Plane style of media production: let the Internet write it for you. Tip o’ the bucket to Abbie. # Evolution and Turing Machines I haven’t made any headway on the Grey Lady’s Top 100 Books of the Year, partly because I’ve been too busy reading what comes down the arXivotubes. For example, take Giovanni Feverati and Fabio Musso’s recent e-print, “An evolutionary model with Turing machines” (arXiv:0711.3580, 22 November). The development of a large non-coding fraction in eukaryotic DNA and the phenomenon of the code-bloat in the field of evolutionary computations show a striking similarity. This seems to suggest that (in the presence of mechanisms of code growth) the evolution of a complex code can’t be attained without maintaining a large inactive fraction. To test this hypothesis we performed computer simulations of an evolutionary toy model for Turing machines, studying the relations among fitness and coding/non-coding ratio while varying mutation and code growth rates. The results suggest that, in our model, having a large reservoir of non-coding states constitutes a great (long term) evolutionary advantage. Taking the broad view, it’s interesting that a large mass of genetic information might not directly code for phenotypic features while still having a long-term adaptive advantage. (Thinking about fitness over multiple generations gives me a headache: the mapping from phenotype to fitness value isn’t constant over time. Cluster a whole bunch of predators together, and they kill off all the prey, changing their environment and thereby making their own phenotype unfit. Some people really care about this; I just know it makes me want to go back to neutrino physics.) A few details of note: First, Feverati and Musso evolve their Turing machines to specified goals: they want a machine which ends its operation with a tape containing a particular sequence of zeros and ones. In different trials, two distinct goal tapes were used, a tape representing the first 100 prime numbers and a tape holding the bits after the radix point in the binary expansion of π. (These goals were chosen in part because a periodic distribution is an easy thing for a Turing machine to make. I suppose we’re talking about maximizing Kolmogorov complexity, though the authors don’t make that connection explicitly.) Second, the only form of mutation in their simulation was point mutation. More specifically, they randomly changed each entry in a Turing machine’s description with some probability $$p_{\rm m}$$. Gene duplication and other such mechanisms were not implemented. States are added with a certain probability $$p_{\rm i}$$ per timestep; these states are non-coding until a point mutation elsewhere establishes a call to them. UPDATE (28 November): A real biologist offers comments below. The number of people who know more than I do about any given subject is awfully impressive! UPDATE (1 December): More here. # Yawn: More Abuse of the Quantum Binocular rivalry is a phenomenon which occurs when conflicting information is presented to each of our two eyes, and the brain has to cope with the contradiction. Instead of seeing a superimposition or “average” of the two, our perceptual machinery entertains both possibilities in turn, randomly flickering from one to the other. This presents an interesting way to stress-test our visual system and see how vision works. Unfortunately, talk of “perception” leads to talk of “consciousness,” and once “consciousness” has been raised, an invocation of quantum mechanics can’t be too far behind. I’m late to join the critical party surrounding E. Manousakis’ paper, “Quantum theory, consciousness and temporal perception: Binocular rivalry,” recently uploaded to the arXiv and noticed by Mo at Neurophilosophy. Manousakis applies “quantum theory” (there’s a reason for those scare quotes) to the problem of binocular rivalry and from this hat pulls a grandiose claim that quantum physics is relevant for human consciousness. A NOTE ON WIRES AND SLINKYS First, we observe that there is a healthy literature on this phenomenon, work done by computational neuroscience people who aren’t invoking quantum mechanics in their explanations. Second, one must carefully distinguish a model of a phenomenon which actually uses quantum physics from a model in which certain mathematical tools are applicable. Linear algebra is a mathematical tool used in quantum physics, but describing a system with linear algebra does not make it quantum-mechanical. Long division and the extraction of square roots can also appear in the solution of a quantum problem, but this does not make dividing 420 lollipops among 25 children a correlate of quantum physics. Just because the same equation applies doesn’t mean the same physics is at work. An electrical circuit containing a capacitor, an inductor and a resistor obeys the same differential equation as a mass on a spring: capacitance corresponds to “springiness,” inductance to inertia and resistance to friction. This does not mean that an electrical circuit is the same thing as a rock glued to a slinky. MIXING THE QUANTUM AND THE CLASSICAL One interesting thing about this paper is that the hypothesis is really only half quantum, at best. In fact, three of the four numbers fed into Manousakis’ hypothesis pertain to a classical phenomenon, and here’s why: Manousakis invokes the formalism of the quantum two-state system, saying that the perception of (say) the image seen by the left eye is one state and that from the right eye is the other. The upshot of this is that the probability of seeing the illusion one way — say, the left-eye version — oscillates over time as $$P(t) = \cos^2(\omega t),$$ where $$\omega$$ is some characteristic frequency of the perceptual machinery. The oscillation is always going, swaying back and forth, but every once in a while, it gets “observed,” which forces the brain into either the clockwise or the counter-clockwise state, from which the oscillation starts again. The quantum two-state system just provides an oscillating probability of favoring one perception, one which goes as the square of $$\cos(\omega t)$$. Three of the four parameters fed into the Monte Carlo simulation actually pertain to how often this two-state system is “observed” and “collapsed”. These parameters describe a completely classical pulse train — click, click, click, pause, click click click click, etc. What’s more, the classical part is the higher-level one, the one which intrudes on the low-level processing. Crudely speaking, it’s like saying there’s a quantum two-state system back in the visual cortex, but all the processing up in the prefrontal lobes is purely classical. Continue reading Yawn: More Abuse of the Quantum # Rejecta Mathematica Walt and Isabel are talking about the newest oddity in mathematics publishing: a forthcoming journal called Rejecta Mathematica. This will be an online journal dedicated to mathematical papers which have been rejected from peer-reviewed publications. Such a journal could be a useful publication venue: papers which show that a promising technique fails or which reprove a known theorem in a not-quite-snazzy way might be worth collecting. Furthermore, it would be neat to look at a probability argument or some “entropy” bafflegab from a cdesign proponentsist and say, “This couldn’t even be published in Rejecta! # Cuttlefish Wins Teh Internets To those familiar with the Cuttlefish‘s habits, this is, of course, old news. I wouldn’t think you could get anything useful out of a blogospheric ramble about Blavatsky and Theosophy, but the Digital Cuttlefish was able to see past the blather about “fifth race humans” and the “girasas race” to find artistic and comedic gold, with just the proper bite: Ceiling Cat is watching you post From up in his lofty location — The comments make Ceiling Cat shudder and say “O Hai. You can has medication.” (Image from the Lolcat Bible.) Radiohead audio and insect video. . . what more do you need? Tip o’ the beret to Bug Girl. # Cectic Speaks for Me I don’t have this conversation in person, because I hang out with academics, engineers and martial artists whose idea of “ancient Eastern wisdom” runs to better ways of extracting an enemy’s eyeball from its socket during combat. However, it happens on the Internet often enough to be all too familiar: # Vacation Memories 2: Baggage I’m back home from my brief travels, and I returned to find the latest outbreak of quantum woo infection, followed immediately by a heap of silliness about anthropic twaddle. “Too soon,” I thought. “I need to go back on vacation.” So, instead of complaining at great length about things I’ve already complained about, I’ll just share one quick observation and then head out into the outside world, shopping for art supplies. Yesterday, I flew into Boston. In my laptop I carried a hardback of Lois Lowry‘s The Giver (1993) and, to recapture a more innocent time, Feynman and Weinberg’s Elementary Particles and the Laws of Physics (1987). In between reading these two, I happened to glance at the pamphlet-type thing which the airline clerk had given me to hold my boarding passes in. Here’s the puzzling part, under the “Free Baggage Allowance” heading: Carry-on Baggage is limited to one piece per passenger, plus a personal item such as a purse, briefcase or laptop computer. The carry-on cannot exceed 51 inches (11″ × 14″ × 26″) and must fit under the seat or in an overhead compartment. Why are the three linear dimensions added? The frame device the airline positions at each gate for testing whether or not your carry-on will fit rejects your baggage if any dimension exceeds the threshold set. Your baggage is deemed invalid even if the total volume is less than 11″ × 14″ × 26″ = 4004 in3 (just try carrying on something long and skinny). The longest diagonal of an 11″ × 14″ × 26″ box is $$\sqrt{14^2 + 26^2} \approx 29.5$$ inches long. So, you can have a carry-on item the sum of whose edge lengths is, say, thirty-one inches, and which won’t fit the actual airline restrictions no matter how you try to wedge it in sideways. The sum of the height, width and depth is a meaningless number. # Vacation Memories 1: Unbuilding One of the fringe benefits of visiting family is that I get to reacquaint myself with the books among which I grew up. Prominent among my warm and sunlit memories of bookworm-hood are David Macaulay‘s illustrated volumes. Macaulay’s career began with Cathedral (1973), the story of a medieval town building a Gothic cathedral; he followed with Pyramid (1975) and Castle (1977), among others, and his whimsy grew to (heh heh) mammoth proportions with The Way Things Work (1988). The book I’d like to talk about today, however, is Unbuilding (1980). After describing how all sorts of great architectural works were built, Macaulay decided to explain how one gets taken apart, and for his example, he chose the Empire State Building. The whole book is full of fascinating details on how a building can be demolished, with some portions preserved for re-erection elsewhere, but those details aren’t what caught my eye. In each of his illustrated architecture books, Macaulay provides a back story, telling who’s building the title piece of the book and why. Naturally, he constructs a story for the Empire State Building’s demolition, too. Prince Ali Smith, a Saudi oil tycoon educated in the United States, needs a new corporate headquarters for the Greater Riyadh Institute of Petroleum (GRIP). To the surprise and bafflement of his fellow executives, Prince Ali announces that GRIP will buy the Empire State Building, dismantle it, and rebuild it in the Arabian desert. What happens next, I quote below the fold: Continue reading Vacation Memories 1: Unbuilding # Glum in the Bookstore Over at the Hellfire Club, Russell Blackford has been writing 15,000 words on American science fiction for a big zarkin’ literary encyclopedia. Part of his job seems to be the invention of history: he gets to write about “contemporary” science fiction, the writing on which judgments have not yet been made. And while talking about books is always fun, saying things which have never been said before about them is even better. (Yo, transhumanists: is there any market which will pay me to discourse on how the practice of “tubing” fetuses in the Honorverse, and particularly in the novel At All Costs (2005), is the science-fictional antithesis of Brave New World (1932)? ) Anyway, having this freshly in mind put me into a bit of a melancholy mood last night, while I was wandering through the local Barnes-and-Borders-A-Million. (I’m visiting family in a town where there’s not much else to do.) From the looks of their science-fiction section, the surest way to get published is to write a Star Wars or Star Trek novel. To paraphrase Mr. Spock, that’s a situation which calls for a colorful metaphor. I mean, do we really want the New Jedi Order to be the public face of contemporary SF? # Squidtivity A famous example of the troubles involved in Biblical translation is the expression “lamb of God.” How do you convey the idea — the cute animal which gets killed to sate bloodlust — to a culture which doesn’t know about lambs? If you were trying to translate the New Testament for the Inuit, to cash in on the lucrative Nunavut market, you might go with “seal of God” instead. I don’t think anybody has yet gone with “squid of God,” however. So, to provide at least a partial remedy, I give you Squidtivity! I wonder why Sunday school forgot to mention that the wise men came from R’lyeh. Tip o’ the Magi’s crown to Retrokatze.
{}
# Clippings from A Philosophy of Software Design ## Preface • On the Criteria To Be Used in Decomposing Systems into Modules • The most fundamental problem in computer science is problem decomposition • The central design task we face every day • We teach for loops and object-oriented programming, but not software design. • There is a huge variation in quality and productivity among programmers • We have made little attempt to understand what makes the best programmers so much better or to teach those skills in our classes. • Outstanding performance in many fields is related more to high-quality practice than innate ability • Students learn best by writing code, making mistakes, and then seeing how their mistakes and the subsequent fixes relate to the principles. • The overall goal is to reduce complexity; this is more important than any particular principle or idea you read here ## 1 Introduction (It's All About Complexity) • Why? • All programming requires is a creative mind and the ability to organize your thoughts. • This means that the greatest limitation in writing software is our ability to understand the systems we are creating. • The larger the program, and the more people that work on it, the more difficult it is to manage complexity. • How? • Good development tools can help us deal with complexity. But there is a limit to what we can do with tools alone. • simpler designs allow us to build larger and more powerful systems before complexity becomes overwhelming. • There are two general approaches to fighting complexity, 1. eliminate complexity by making code simpler and more obvious. 2. encapsulate it, so that programmers can work on a system without being exposed to all of its complexity at once. (modular design) • Because software is so malleable, software design is a continuous process that spans the entire lifecycle of a software system; • Incremental Development over Waterfall • It isn't possible to visualize the design for a large software system well enough to understand all of its implications before building anything. • The incremental approach works for software because software is malleable enough to allow significant design changes partway through implementation. • Incremental development means that 1. software design is never done. 2. continuous redesign. • This book is about how to use complexity to guide the design of software throughout its lifetime. • This book has two overall goals. 1. describe the nature of software complexity: • what does "complexity" mean • why does it matter • how can you recognize when a program has unnecessary complexity? 2. present techniques you can use during the software development process to minimize complexity. • there isn't a simple recipe that will guarantee great software designs. • a collection of higher-level concepts that border on the philosophical, • These concepts may not immediately identify the best design, but you can use them to compare design alternatives and guide your exploration of the design space. ### 1.1 How to use this book • The best way: in conjunction with code reviews • When you read other people’s code, think about whether it conforms to the concepts discussed here and how that relates to the complexity of the code. • It's easier to see design problems in someone else’s code than your own • You can use the red flags described here to identify problems and suggest improvements. • Reviewing code will also expose you to new design approaches and programming techniques. • One of the best ways to improve your design skills is to learn to recognize red flags (code smells): signs that a piece of code is probably more complicated than it needs to be. • Don't give up easily: the more alternatives you try before fixing the problem, the more you will learn. • When applying the ideas from this book, it’s important to use moderation and discretion • Every rule has its exceptions • Every principle has its limits • Beautiful designs reflect a balance between competing ideas and approaches ## 2 The Nature of Complexity • This chapter: 1. understand the enemy at a high level 1. What is "complexity"? 2. Unnecessarily complex? 3. What causes systems to become complex? 2. Lays out some basic assumptions that provide a foundation for the rest of the book • The ability to recognize complexity is a crucial design skill. 1. allows you to identify problems before you invest a lot of effort in them 2. allows you to make good choices among alternatives • It's easier to tell whether a design is simple than it is to create a simple design • Once you can recognize that a system is too complicated, you can use that ability to guide your design philosophy towards simplicity. -> Try a different approach and see if that is simpler. • Over time, you will notice that certain techniques tend to result in simpler designs, while others correlate with complexity. -> This allows you to produce simpler designs more quickly. ### 2.1 Complexity defined • Complexity is anything related to the structure of a software system that makes it hard to understand and modify the system • Complexity can take many forms • Complexity is determined by the activities that are most common (the part that are touched often) $$C=\sum_p{c_p t_p}$$ • The overall complexity of a system (C) is determined by the complexity of each part p (c_p) weighted by the fraction of time developers spend working on that part (t_p). • Isolating complexity in a place where it will never be seen is almost as good as eliminating the complexity entirely. • Complexity is more apparent to readers than writers. • Your job as a developer is not just to create code that you can work with easily, but to create code that others can also work with easily. ### 2.2 Symptoms of complexity 1. Change amplification • A seemingly simple change requires code modifications in many different places. • One of the goals of good design is to reduce the amount of code that is affected by each design decision, so design changes don’t require very many code modifications. • Refers to how much a developer needs to know in order to complete a task. • Why a higher cognitive load is bad: there is a greater risk of bugs because they have missed something important. • Cognitive load arises in many ways, such as • APIs with many methods, • global variables, • inconsistencies, • dependencies between modules. • System designers sometimes assume that complexity can be measured by lines of code • There are costs (of fewer LoCs) associated with cognitive load • Sometimes an approach that requires more lines of code is actually simpler, because it reduces cognitive load. 3. Unknown unknowns • It is not obvious 1. which pieces of code must be modified to complete a task, 2. what information a developer must have to carry out the task successfully • The worst: it is unclear what to do or whether a proposed solution will even work 1. There is something you need to know 2. But there is no way for you to find out what it is, or even whether there is an issue 3. You won't find out about it until bugs appear after you make a change 4. The only way to be certain is to read every line of code in the system, which is impossible for systems of any size • One of the most important goals of good design is for a system to be obvious • This is the opposite of high cognitive load and unknown unknowns • In an obvious system, a developer can 1. quickly understand how the existing code works and what is required to make a change 2. make a quick guess about what to do, without thinking very hard, and yet be confident that the guess is correct • 18 Code Should be Obvious ### 2.3 Causes of complexity • Complexity is caused by two things 1. A dependency exists when a given piece of code cannot be understood and modified in isolation • Dependencies are a fundamental part of software and can’t be completely eliminated. • However, one of the goals of software design is to reduce the number of dependencies and to make the dependencies that remain as simple and obvious as possible. 2. Obscurity occurs when important information is not obvious. • Obscurity is often associated with dependencies, where it is not obvious that a dependency exist • Inconsistency is also a major contributor to obscurity: • However, obscurity is also a design issue • If a system has a clean and obvious design, then it will need less documentation. • The need for extensive documentation is often a red flag that the design isn't quite right. • The best way to reduce obscurity is by simplifying the system design. • Together, dependencies and obscurity account for the three manifestations of complexity • Obscurity creates unknown unknowns, and also contributes to cognitive load • f we can find design techniques that minimize dependencies and obscurity, then we can reduce the complexity of software. ### 2.4 Complexity is incremental • Complexity isn't caused by a single catastrophic error; it accumulates in lots of small chunks • A single dependency or obscurity, by itself, is unlikely to affect significantly the maintainability of a software system. • The incremental nature of complexity makes it hard to control • Once complexity has accumulated, it is hard to eliminate, since fixing a single dependency or obscurity will not, by itself, make a big difference • In order to slow the growth of complexity, you must adopt a "zero tolerance" philosophy ### 2.5 Conclusion Complexity comes from an accumulation of dependencies and obscurities. As complexity increases, it leads to change amplification, a high cognitive load, and unknown unknowns. As a result, it takes more code modifications to implement each new feature. In addition, developers spend more time acquiring enough information to make the change safely and, in the worst case, they can't even find all the information they need. The bottom line is that complexity makes it difficult and risky to modify an existing code base. ## 3 Working Code Isn't Enough • This chapter: 1. If you want a good design, you must take a more strategic approach where you invest time to produce clean designs and fix problems. 2. Why the strategic approach produces better designs and is actually cheaper than the tactical approach over the long run ### 3.1 Tactical programming • In the tactical approach, your main focus is to get something working • Tactical programming makes it nearly impossible to produce a good system design • The problem is that it's short-sighted • Planning for the future isn't a priority • You don't spend much time looking for the best design • You tell yourself that it's OK to add a bit of complexity or introduce a small kludge or two • This is how systems become complicated • Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado ### 3.2 Strategic programming • The first step towards becoming a good software designer is to realize that working code isn't enough. • It's not acceptable to introduce unnecessary complexities in order to finish your current task faster. • The most important thing is the long-term structure of the system. • Your primary goal must be to produce a great design, which also happens to work. • Strategic programming requires an investment mindset. • Invest time to improve the design of the system. • These investments will slow you down a bit in the short term, but they will speed you up in the long term • Proactive investments 1. it's worth taking a little extra time to find a simple design for each new class; rather than implementing the first idea that comes to mind, try a couple of alternative designs and pick the cleanest one. 2. Try to imagine a few ways in which the system might need to be changed in the future and make sure that will be easy with your design. 3. Writing good documentation is another example of a proactive investment. • Reactive investments (No matter how much you invest up front, there will inevitably be mistakes in your design decisions.) • When you discover a design problem, don't just ignore it or patch around it; take a little extra time to fix it. • If you program strategically, you will continually make small improvements to the system design. ### 3.3 How much to invest? • The ideal design tends to emerge in bits and pieces, as you get experience with the system • The best approach is to make lots of small investments on a continual basis • Spend about 10-20% of the total development time on investments • Small enough that it won't impact your schedules significantly • Large enough to produce significant benefits over time • It won't be long before you're developing at least 10–20% faster than you would if you had programmed tactically. • At this point your investments become free: the benefits from your past investments will save enough time to cover the cost of future investments. • Poor code quality slows development by at least 20% ### 3.4 Startups and investment • In some environments (early-stage startups) there are strong forces working against the strategic approach • Once a code base turns to spaghetti, it is nearly impossible to fix. • The payoff for good (or bad) design comes pretty quickly, so there's a good chance that the tactical approach won't even speed up your first product release. • One of the most important factors for success of a company is the quality of its engineers. • The best way to lower development costs is to hire great engineers • The best engineers care deeply about good design. • If your code base is a wreck, word will get out, and this will make it harder for you to recruit. • As a result, you are likely to end up with mediocre engineers. • Facebook changed its motto (from "Move fast and break things") to "Move fast with solid infrastructure" to encourage its engineers to invest more in good design. • Fortunately, it is also possible to succeed in Silicon Valley with a strategic approach. • VMware ### 3.5 Conclusion • Good design doesn't come for free. It has to be something you invest in continually, so that small problems don't accumulate into big • It's crucial to be consistent in applying the strategic approach and to think of investment as something to do today, not tomorrow. ones. Fortunately, good design eventually pays for itself, and sooner than you might think. • The most effective approach is one where every engineer makes continuous small investments in good design. ## 4 Modules Should Be Deep • modular design: • design systems so that developers only need to face a small fraction of the overall complexity at any given time. • One of the most important techniques for managing software complexity • this chapter: basic principles of modular design ### 4.1 Modular design • In modular design, a software system is decomposed into a collection of modules that are relatively independent. • Modules can take many forms, • In an ideal world, each module would be completely independent of the others: • a developer could work in any of the modules without knowing anything about any of the other modules. • In this world, the complexity of a system would be the complexity of its worst module. • Unfortunately, this ideal is not achievable. • The goal of modular design is to minimize the dependencies between modules. • In order to manage dependencies, we think of each module in two parts: 1. Interface • The interface consists of everything that a developer working in a different module must know in order to use the given module. • Typically, the interface describes what the module does but not how it does it. 2. Implementation • The implementation consists of the code that carries out the promises made by the interface. • A developer should not need to understand the implementations of modules other than the one he or she is working in. • The best modules are those whose interfaces are much simpler than their implementations. 1. a simple interface minimizes the complexity that a module imposes on the rest of the system. 2. if a module is modified in a way that does not change its interface, then no other module will be affected by the modification. -> If a module’s interface is much simpler than its implementation, there will be many aspects of the module that can be changed without affecting other modules. ### 4.2 What’s in an interface? • The interface to a module contains two kinds of information: • formal • specified explicitly in the code, • some of these can be checked for correctness by the programming language. • informal • These are not specified in a way that can be understood or enforced by the programming language. • its high-level behavior, • constraints on the usage of a class • an interface described in English is likely to be more intuitive and understandable for developers than one written in a formal specification language. • For most interfaces the informal aspects are larger and more complex than the formal aspects. • One of the benefits of a clearly specified interface is that it indicates exactly what developers need to know in order to use the associated module. -> helps to eliminate the unknown unknowns ### 4.3 Abstractions • An abstraction is a simplified view of an entity, which omits unimportant details. • Abstractions are useful because they make it easier for us to think about and manipulate complex things. • In modular programming, each module provides an abstraction in form of its interface. • The interface presents a simplified view of the module’s functionality; • the details of the implementation are unimportant from the standpoint of the module’s abstraction, so they are omitted from the interface. • the word "unimportant" is crucial. • The more unimportant details that are omitted from an abstraction, the better. • However, a detail can only be omitted from an abstraction if it is unimportant. • An abstraction can go wrong in two ways. 1. it can include details that are not really important; 2. omits details that really are important. • An abstraction that omits important details is a false abstraction: it might appear simple, but in reality it isn’t. • The key to designing abstractions is 1. to understand what is important 2. to look for designs that minimize the amount of information that is important. • We depend on abstractions to manage complexity not just in programming, but pervasively in our everyday lives. ### 4.4 Deep modules • The best modules are deep: they have a lot of functionality hidden behind a simple interface. • Module depth is a way of thinking about cost versus benefit. • The benefit provided by a module is its functionality. • The cost of a module (in terms of system complexity) is its interface. • Interfaces are good, but more, or larger, interfaces are not necessarily better! • Examples • Unix I/O • garbage collectors ### 4.5 Shallow modules   RedFlag • Shallow classes are sometimes unavoidable, but they don’t provide help much in managing complexity. • Small modules tend to be shallow. • Red Flag: Shallow Module ### 4.6 Classitis • The conventional wisdom in programming is that classes should be small, not deep. • The extreme of the "classes should be small" approach is a syndrome I call classitis, • which stems from the mistaken view that "classes are good, so more classes are better." • Classitis may result in classes that are individually simple, but it increases the complexity of the overall system. 1. Small classes don’t contribute much functionality, so there have to be a lot of them, each with its own interface. These interfaces accumulate to create tremendous complexity at the system level. 2. Small classes also result in a verbose programming style, due to the boilerplate required for each class. ### 4.7 Examples: Java and Unix I/O • interfaces should be designed to make the common case as simple as possible • If an interface has many features, but most developers only need to be aware of a few of them, the effective complexity of that interface is just the complexity of the commonly used features. ### 4.8 Conclusion • By separating the interface of a module from its implementation, we can hide the complexity of the implementation from the rest of the system. • Users of a module need only understand the abstraction provided by its interface. • The most important issue in designing classes and other modules is to make them deep, so that they have simple interfaces for the common use cases, yet still provide significant functionality. This maximizes the amount of complexity that is concealed. ## 5 Information Hiding (and Leakage) • Techniques for creating deep modules. ### 5.1 Information hiding • The most important technique for achieving deep modules • On the Criteria To Be Used in Decomposing Systems into Modules • The basic idea is that each module should encapsulate a few pieces of knowledge, which represent design decisions. • The knowledge is embedded in the module's implementation but does not appear in its interface, so it is not visible to other modules. • The information hidden within a module usually consists of details about how to implement some mechanism. • The hidden information includes data structures and algorithms related to the mechanism. • Information hiding reduces complexity in two ways. 1. it simplifies the interface to a module. • The interface reflects a simpler, more abstract view of the module's functionality and hides the details; • this reduces the cognitive load on developers who use the module. 2. information hiding makes it easier to evolve the system. • If a piece of information is hidden, there are no dependencies on that information outside the module containing the information, • so a design change related to that information will affect only the one module. • When designing a new module, you should think carefully about what information can be hidden in that module. • If you can hide more information, you should also be able to simplify the module's interface, and this makes the module deeper. • Hiding variables and methods in a class by declaring them private isn't the same thing as information hiding. • Private elements can help with information hiding • However, information about the private items can still be exposed through public methods. • The best form of information hiding is when information is totally hidden within a module • So that it is irrelevant and invisible to users of the module • Partial information hiding also has value • If a particular feature or piece of information is only needed by a few of a class's users, and it is accessed through separate methods so that it isn't visible in the most common use cases, then that information is mostly hidden. • Such information will create fewer dependencies than information that is visible to every user of the class. ### 5.2 Information leakage   RedFlag • The opposite of information hiding • Information leakage occurs when a design decision is reflected in multiple modules. (when the same knowledge is used in multiple places) • : If a piece of information is reflected in the interface for a module, then by definition it has been leaked; • : Information can be leaked even if it doesn't appear in a module's interface • e.g. two classes both have knowledge of a particular file format • more pernicious than leakage through an interface, because it isn't obvious. • One of the most important red flags in software design. • One of the best skills you can learn as a software designer is a high level of sensitivity to information leakage. • If you encounter information leakage between classes, ask yourself "How can I reorganize these classes so that this particular piece of knowledge only affects a single class?" 1. If the affected classes are relatively small and closely tied to the leaked information, it may make sense to merge them into a single class. 2. Pull the information out of all of the affected classes and create a new class that encapsulates just that information. • However, this approach will be effective only if you can find a simple interface that abstracts away from the details; • if the new class exposes most of the knowledge through its interface, then it won't provide much value (you've simply replaced back-door leakage with leakage through an interface). ### 5.3 Temporal decomposition   RedFlag Consider an application that reads a file in a particular format, modifies the contents of the file, and then writes the file out again. With temporal decomposition, this application might be broken into three classes: one to read the file, another to perform the modifications, and a third to write out the new version. Both the file which results in information leakage. The solution is to combine the core mechanisms for reading and writing files into a single class. This class will get used during both the reading and writing phases of the application. • A design style • In temporal decomposition, the structure of a system corresponds to the time order in which operations will occur • It's easy to fall into the trap of temporal decomposition, because the order in which operations must occur is often on your mind when you code. • Most design decisions manifest themselves at several different times over the life of the application; as a result, temporal decomposition often results in information leakage. • Order usually does matter, so it will be reflected somewhere in the application • However, it shouldn't be reflected in the module structure • Unless that structure is consistent with information hiding (perhaps the different stages use totally different information). • When designing modules, focus on the knowledge that's needed to perform each task, not the order in which tasks occur. ### 5.4 Example: HTTP server • The students in the course were asked to implement one or more classes to make it easy for Web servers to receive incoming HTTP requests and send responses. #### 5.5 Example: too many classes • The most common mistake made by students was to divide their code into a large number of shallow classes, which led to information leakage between the classes. • Information hiding can often be improved by making a class slightly larger. 1. Bring together all of the code related to a particular capability • (such as parsing an HTTP request), • so that the resulting class contains everything related to that capability. 2. Raise the level of the interface • Of course, it is possible to take the notion of larger classes too far (such as a single class for the entire application). 9 Better Together Or Better Apart? #### 5.6 Example: HTTP parameter handling • It's important to avoid exposing internal data structures • Example: • bad: getParams • better: getParameter • even better?: getIntParameter • This saves the caller from having to request string-to-integer conversion separately, and hides that mechanism from the caller. • Additional methods for other data types, such as getDoubleParameter, could be defined if needed. • (All of these methods will throw exceptions if the desired parameter doesn't exist, or if it can't be converted to the requested type; the exception declarations have been omitted in the code above). #### 5.7 Example: defaults in HTTP responses • Interfaces should be designed to make the common case as simple as possible. • Whenever possible, classes should "do the right thing" without being explicitly asked. • Red Flag: Overexposure   RedFlag If the API for a commonly used feature forces users to learn about other features that are rarely used, this increases the cognitive load on users who don’t need the rarely used features. ### 5.8 Information hiding within a class 1. Try to design the private methods within a class so that each method encapsulates some information or capability and hides it from the rest of the class. 2. In addition, try to minimize the number of places where each instance variable is used. • if you can reduce the number of places where a variable is used, you will eliminate dependencies within the class and reduce its complexity. ### 5.9 Taking it too far • Information hiding only makes sense when the information being hidden is not needed outside its module. • If the information is needed outside the module, then you must not hide it. • As a software designer, your goal should be to minimize the amount of information needed outside a module • But, it's important to recognize which information is needed outside a module and make sure it is exposed. ### 5.10 Conclusion • Information hiding and deep modules are closely related. • If a module hides a lot of information, that tends to increase the amount of functionality provided by the module while also reducing its interface. This makes the module deeper • Conversely, if a module doesn't hide much information, then either it doesn't have much functionality, or it has a complex interface; either way, the module is shallow. • When decomposing a system into modules, try not to be influenced by the order in which operations will occur at runtime; that will lead you down the path of temporal decomposition, which will result in information leakage and shallow modules. • Instead, think about the different pieces of knowledge that are needed to carry out the tasks of your application, and design each module to encapsulate one or a few of those pieces of knowledge. • This will produce a clean and simple design with deep modules. ## 6 General-Purpose Modules are Deeper • The general-purpose approach seems consistent with the investment mindset discussed in Chapter 3, where you spend a bit more time up front to save time later on. • The special-purpose approach seems consistent with an incremental approach to software development. ### 6.1 Make classes somewhat general-purpose • In my experience, the sweet spot is to implement new modules in a somewhat general-purpose fashion. • The module's functionality should reflect your current needs, but its interface should not. • Instead, the interface should be general enough to support multiple uses. • The most important (and perhaps surprising) benefit of the general-purpose approach is that it results in simpler and deeper interfaces than a special-purpose approach. • The general-purpose approach can also save you time in the future, if you reuse the class for other purposes. • However, even if the module is only used for its original purpose, the general-purpose approach is still better because of its simplicity. ### 6.2 Example: storing text for an editor void backspace(Cursor cursor); void delete(Cursor cursor); void deleteSelection(Selection selection); • Each new user interface operation required a new method to be defined in the text class, so a developer working on the user interface was likely to end up working on the text class as well. • One of the goals in class design is to allow each class to be developed independently, but the specialized approach tied the user interface and text classes together. #### 6.3 A more general-purpose API void insert(Position position, String newText); void delete(Position start, Position end); Position changePosition(Position position, int numChars); text.delete(cursor, text.changePosition(cursor, 1)); // delete text.delete(text.changePosition(cursor, -1), cursor); // backspace #### 6.4 Generality leads to better information hiding • One of the most important elements of software design is determining who needs to know what, and when. • When the details are important, it is better to make them explicit and as obvious as possible, • Hiding this information behind an interface just creates obscurity. ### 6.5 Questions to ask yourself • What is the simplest interface that will cover all my current needs? • If you reduce the number of methods in an API without reducing its overall capabilities, then you are probably creating more general-purpose methods • Reducing the number of methods makes sense only as long as the API for each individual method stays simple; if you have to introduce lots of additional arguments in order to reduce the number of methods, then you may not really be simplifying things. • In how many situations will this method be used? • Is this API easy to use for my current needs? • This question can help you to determine when you have gone too far in making an API simple and general-purpose. • If you have to write a lot of additional code to use a class for your current purpose, that’s a red flag that the interface doesn’t provide the right functionality ### 6.6 Conclusion • General-purpose interfaces have many advantages over special-purpose ones. • They tend to be simpler, with fewer methods that are deeper. • They also provide a cleaner separation between classes, whereas special-purpose interfaces tend to leak information between classes. • Making your modules somewhat general-purpose is one of the best ways to reduce overall system complexity. ## 7 Different Layer, Different Abstraction • If a system contains adjacent layers with similar abstractions, this is a red flag that suggests a problem with the class decomposition. • This chapter discusses situations where this happens, the problems that result, and how to refactor to eliminate the problems. ### 7.1 Pass-through methods • When adjacent layers have similar abstractions, the problem often manifests itself in the form of pass-through methods. • A pass-through method is one that does little except invoke another method, whose signature is similar or identical to that of the calling method. • This typically indicates that there is not a clean division of responsibility between the classes. • Pass-through methods make classes shallower: • they increase the interface complexity of the class, which adds complexity, • but they don't increase the total functionality of the system. • Pass-through methods also create dependencies between classes: • the interface to a piece of functionality should be in the same class that implements the functionality. • consider the two classes and ask yourself "Exactly which features and abstractions is each of these classes responsible for?" • The solution is to refactor the classes so that each class has a distinct and coherent set of responsibilities. ### 7.2 When is interface duplication OK? • Having methods with the same signature is not always bad. • The important thing is that each new method should contribute significant functionality. • Pass-through methods are bad because they contribute no new functionality. • One example is a dispatcher. • A dispatcher is a method that uses its arguments to select one of several other methods to invoke; then it passes most or all of its arguments to the chosen method. • the dispatcher provides useful functionality: it chooses which of several other methods should carry out each task. • Another example is interfaces with multiple implementations, • When several methods provide different implementations of the same interface, it reduces cognitive load. • Once you have worked with one of these methods, it's easier to work with the others, since you don't need to learn a new interface. • Methods like this are usually in the same layer and they don't invoke each other. ### 7.3 Decorators • The motivation for decorators is to separate special-purpose extensions of a class from a more generic core. • However, decorator classes tend to be shallow: they introduce a large amount of boilerplate for a small amount of new functionality. • Decorator classes often contain many pass-through methods. • It's easy to overuse the decorator pattern, creating a new class for every small new feature. • Before creating a decorator class, consider alternatives such as the following: • Could you add the new functionality directly to the underlying class, rather than creating a decorator class? • This makes sense if • the new functionality is relatively general-purpose, • it is logically related to the underlying class, • most uses of the underlying class will also use the new functionality. • If the new functionality is specialized for a particular use case, would it make sense to merge it with the use case, rather than creating a separate class? • Could you merge the new functionality with an existing decorator, rather than creating a new decorator? • This would result in a single deeper decorator class rather than multiple shallow ones. • ask yourself whether the new functionality really needs to wrap the existing functionality: • could you implement it as a stand-alone class that is independent of the base class? ### 7.4 Interface versus implementation • The interface of a class should normally be different from its implementation: the representations used internally should be different from the abstractions that appear in the interface. • If the two have similar abstractions, then the class probably isn't very deep. • The difference represents valuable functionality provided by the class. ### 7.5 Pass-through variables • a variable that is passed down through a long chain of methods. • Pass-through variables add complexity because 1. they force all of the intermediate methods to be aware of their existence, even though the methods have no use for the variables. 2. Furthermore, if a new variable comes into existence, you may have to modify a large number of interfaces and methods to pass the variable through all of the relevant paths. • Eliminating pass-through variables can be challenging. 1. see if there is already an object shared between the topmost and bottommost methods. • However, if there is such an object, then it may itself be a pass-through variable 2. store the information in a global variable, • but global variables almost always create other problems. 3. introduce a context object (The solution I use most often) • A context stores all of the application's global state (anything that would otherwise be a pass-through variable or global variable). • The context allows multiple instances of the system to coexist in a single process, each with its own context. • Unfortunately, the context will probably be needed in many places, so it can potentially become a pass-through variable. • To reduce the number of methods that must be aware of it, a reference to the context can be saved in most of the system's major objects. • With this approach, the context is available everywhere, but it only appears as an explicit argument in constructors. • The context object unifies the handling of all system-global information and eliminates the need for pass-through variables. • The context makes it easy to identify and manage the global state of the system, since it is all stored in one place. • The context is also convenient for testing: test code can change the global configuration of the application by modifying fields in the context. • Contexts are far from an ideal solution. • The variables stored in a context have most of the disadvantages of global variables; • Without discipline, a context can turn into a huge grab-bag of data that creates nonobvious dependencies throughout the system. • Contexts may also create thread-safety issues; • the best way to avoid problems is for variables in a context to be immutable. • Unfortunately, I haven't found a better solution than contexts. ### 7.6 Conclusion • In order for an element to provide a net gain against complexity, it must eliminate some complexity that would be present in the absence of the design element. • Otherwise, you are better off implementing the system without that particular element. • The "different layer, different abstraction" rule is just an application of this idea: ## 8 Pull Complexity Downwards • This chapter • introduces another way of thinking about how to create deeper classes. • It is more important for a module to have a simple interface than a simple implementation. • Most modules have more users than developers, so it is better for the developers to suffer than the users. • As a module developer, you should strive to make life as easy as possible for the users of your module, even if that means extra work for you. ### 8.1 Example: editor text class • A character-oriented interface such as the one described in Section 6.3 pulls complexity downward. • This approach is better because it encapsulates the complexity of splitting and merging within the text class, which reduces the overall complexity of the system. ### 8.2 Example: configuration parameters • Configuration parameters are an example of moving complexity upwards instead of down. • an easy excuse to avoid dealing with important issues and pass them on to someone else. • In many cases, it's difficult or impossible for users or administrators to determine the right values for the parameters. • In other cases, the right values could have been determined automatically with a little extra work in the system implementation. • Configuration parameters can easily become out of date. • Before exporting a configuration parameter, ask yourself: “will users (or higher-level modules) be able to determine a better value than we can determine here?” • Ideally, each module should solve a problem completely; configuration parameters result in an incomplete solution, which adds to system complexity. ### 8.3 Taking it too far • This is an idea that can easily be overdone. • Pulling complexity down makes the most sense if 1. the complexity being pulled down is closely related to the class's existing functionality, 2. pulling the complexity down will result in many simplifications elsewhere in the application, and 3. pulling the complexity down simplifies the class's interface. • Remember that the goal is to minimize overall system complexity. ### 8.4 Conclusion • When developing a module, look for opportunities to take a little bit of extra suffering upon yourself in order to reduce the suffering of your users. ## 9 Better Together Or Better Apart? • One of the most fundamental questions in software design is this: given two pieces of functionality, should they be implemented together in the same place, or should their implementations be separated? • This question applies at all levels in a system, such as functions, methods, classes, and services. • This chapter discusses the factors to consider when making these decisions. • When deciding whether to combine or separate, the goal is to reduce the complexity of the system as a whole and improve its modularity. • The act of subdividing creates additional complexity that was not present before subdivision: • Some complexity comes just from the number of components: • the more components, the harder to keep track of them all and the harder to find a desired component within the large collection. • Subdivision usually results in more interfaces, and every new interface adds complexity. • Subdivision can result in additional code to manage the components. • Subdivision creates separation: • the subdivided components will be farther apart than they were before subdivision. • Separation makes it harder for developers to see the components at the same time, or even to be aware of their existence. • If the components are truly independent, then separation is good: it allows the developer to focus on a single component at a time, without being distracted by the other components. • On the other hand, if there are dependencies between the components, then separation is bad: developers will end up flipping back and forth between the components. • Even worse, they may not be aware of the dependencies, which can lead to bugs. • Subdivision can result in duplication: code that was present in a single instance before subdivision may need to be present in each of the subdivided components. • Here are a few indications that two pieces of code are related: • They share information; • They are used together: anyone using one of the pieces of code is likely to use the other as well. • They overlap conceptually, in that there is a simple higher-level category that includes both of the pieces of code. • It is hard to understand one of the pieces of code without looking at the other. ### 9.1 Bring together if information is shared • Section 5.4 introduced this principle in the context of a project implementing an HTTP server. • Because of this shared information, it is better to both read and parse the request in the same place; when the two classes were combined into one, the code got shorter and simpler. ### 9.2 Bring together if it will simplify the interface • This often happens when the original modules each implement part of the solution to a problem. • In addition, when the functionality of two or more classes is combined, it may be possible to perform some functions automatically, so that most users need not be aware of them. ### 9.3 Bring together to eliminate duplication • If you find the same pattern of code repeated over and over, see if you can reorganize the code to eliminate the repetition. • Approaches 1. Factor the repeated code out into a separate method and replace the repeated code snippets with calls to the method. • This approach is most effective if the repeated code snippet is long and the replacement method has a simple signature. • If the snippet interacts in complex ways with its environment (such as by accessing numerous local variables), then the replacement method might require a complex signature (such as many pass-by-reference arguments), which would reduce its value. 2. Refactor the code so that the snippet in question only needs to be executed in one place. • goto ### 9.4 Separate general-purpose and special-purpose code • If a module contains a mechanism that can be used for several different purposes, then it should provide just that one general-purpose mechanism. • It should not include code that specializes the mechanism for a particular use, nor should it contain other general-purpose mechanisms. • Special-purpose code associated with a general-purpose mechanism should normally go in a different module (typically one associated with the particular purpose). • This approach eliminated information leakage and additional interfaces • In general, the lower layers of a system tend to be more general-purpose and the upper layers more special-purpose. • The way to separate special-purpose code from general-purpose code is to 1. pull the special-purpose code upwards, into the higher layers, 2. leaving the lower layers general-purpose. #### Red Flag: Repetition   RedFlag • If the same piece of code (or code that is almost the same) appears over and over again, that's a red flag that you haven't found the right abstractions. ### 9.5 Example: insertion cursor and selection #### Red Flag: Special-General Mixture   RedFlag • This red flag occurs when a general-purpose mechanism also contains code specialized for a particular use of that mechanism. • This makes the mechanism more complicated and creates information leakage between the mechanism and the particular use case: future modifications to the use case are likely to require changes to the underlying mechanism as well. ### 9.6 Example: separate class for logging • This separation added complexity with no benefit. • The logging methods were shallow: • most consisted of a single line of code, but they required a considerable amount of documentation. • Each method was only invoked in a single place. • The logging methods were highly dependent on their invocations: • someone reading the invocation would most likely flip over to the logging method to make sure that the right information was being logged; • similarly, someone reading the logging method would probably flip over to the invocation site to understand the purpose of the method. ### 9.7 Example: editor undo mechanism • The key design decision was the one that separated the general-purpose part of the undo mechanism from the special-purpose parts and put the general-purpose part in a class by itself. Once that was done, the rest of the design fell out naturally. • Note: the suggestion to separate general-purpose code from special-purpose code refers to code related to a particular mechanism. • For example, special-purpose undo code (such as code to undo a text insertion) should be separated from general-purpose undo code (such as code to manage the history list). • However, it often makes sense to combine special-purpose code for one mechanism with general-purpose code for another. • The text class is an example of this: • it implements a general-purpose mechanism for managing text, • but it includes special-purpose code related to undoing. • The undo code is special-purpose because it only handles undo operations for text modifications. • It doesn't make sense to combine this code with the general-purpose undo infrastructure in the History class, but it does make sense to put it in the text class, since it is closely related to other text functions. ### 9.8 Splitting and joining methods • Length by itself is rarely a good reason for splitting up a method. • In general, developers tend to break up methods too much. • Splitting up a method introduces additional interfaces, which add to complexity. • You shouldn't break up a method unless it makes the overall system simpler; • Long methods aren't always bad. • For example, suppose a method contains five 20-line blocks of code that are executed in order. • If the blocks are relatively independent, then the method can be read and understood one block at a time; there's not much benefit in moving each of the blocks into a separate method. • If the blocks have complex interactions, it's even more important to keep them together so readers can see all of the code at once; • If each block is in a separate method, readers will have to flip back and forth between these spread-out methods in order to understand how they work together. • Methods containing hundreds of lines of code are fine if they have a simple signature and are easy to read. These methods are deep (lots of functionality, simple interface), which is good. • When designing methods, the most important goal is to provide clean and simple abstractions. • Each method should do one thing and do it completely. • The method should have a clean and simple interface, so that users don't need to have much information in their heads in order to use it correctly. • The method should be deep: its interface should be much simpler than its implementation. • If a method has all of these properties, then it probably doesn't matter whether it is long or not. • Splitting up a method only makes sense if it results in cleaner abstractions, overall. • A method can be split either 1. (The best way) by extracting a subtask • This form of subdivision makes sense if there is a subtask that is cleanly separable from the rest of the original method, • which means 1. someone reading the child method doesn't need to know anything about the parent method 2. someone reading the parent method doesn't need to understand the implementation of the child method. • Typically this means that the child method is relatively general-purpose: it could conceivably be used by other methods besides the parent. • If you make a split of this form and then find yourself flipping back and forth between the parent and child to understand how they work together, that is a red flag ("Conjoined Methods") indicating that the split was probably a bad idea. 2. by dividing its functionality into two separate methods. • This makes sense if the original method had an overly complex interface because it tried to do multiple things that were not closely related. • Ideally, most callers should only need to invoke one of the two new methods; • if callers must invoke both of the new methods, then that adds complexity, which makes it less likely that the split is a good idea. • The new methods will be more focused in what they do. • It is a good sign if the new methods are more general-purpose than the original method (i.e., you can imagine using them separately in other situations). • (This approach) don't make sense very often, because they result in callers having to deal with multiple methods instead of one. • When you split this way, you run the risk of ending up with several shallow methods, • If the caller has to invoke each of the separate methods, passing state back and forth between them, then splitting is not a good idea. • judge it based on whether it simplifies things for callers. • A method should not be split if it results in shallow methods, as in. • There are also situations where a system can be made simpler by joining methods together. • it (joining methods) might replace two shallow methods with one deeper method; • it might eliminate duplication of code; • it might eliminate dependencies between the original methods, or intermediate data structures; • it might result in better encapsulation, so that knowledge that was previously present in multiple places is now isolated in a single place; or • it might result in a simpler interface, #### Red Flag: Conjoined Methods • It should be possible to understand each method independently. • If you can't understand the implementation of one method without also understanding the implementation of another, that's a red flag. • This red flag can occur in other contexts as well: if two pieces of code are physically separated, but each can only be understood by looking at the other, that is a red flag. ### 9.9 Conclusion • The decision to split or join modules should be based on complexity. • Pick the structure that results in • the best information hiding, • the fewest dependencies, • the deepest interfaces. ## 10 Define Errors Out Of Existence • Exception handling is one of the worst sources of complexity in software systems. • Code that deals with special conditions is inherently harder to write than code that deals with normal cases, • developers often define exceptions without considering how they will be handled. • This chapter discusses • why exceptions contribute disproportionately to complexity • how to simplify exception handling • The key overall lesson from this chapter is to reduce the number of places where exceptions must be handled; • in many cases the semantics of operations can be modified so that the normal behavior handles all situations and there is no exceptional condition to report (hence the title of this chapter). ### 10.1 Why exceptions add complexity • exception: any uncommon condition that alters the normal flow of control in a program. • Formal exception mechanism (try-catch) • Informal exception mechanism (returning special values) • When an exception occurs, the programmer can deal with it in two ways, each of which can be complicated. 1. move forward and complete the work in progress in spite of the exception. 2. abort the operation in progress and report the exception upwards. • aborting can be complicated because the exception may have occurred at a point where system state is inconsistent • the exception handling code must restore consistency, such as by unwinding any changes made before the exception occurred. • Furthermore, exception handling code creates opportunities for more exceptions. • Secondary exceptions occurring during recovery are often more subtle and complex than the primary exceptions. • To prevent an unending cascade of exceptions, the developer must eventually find a way to handle exceptions without introducing more exceptions. • Language support for exceptions tends to be verbose and clunky, which makes exception handling code hard to read. • It’s difficult to ensure that exception handling code really works. • Some exceptions, such as I/O errors, can’t easily be generated in a test environment, so it’s hard to test the code that handles them. • Exceptions don’t occur very often in running systems, so exception handling code rarely executes. • Bugs can go undetected for a long time, and when the exception handling code is finally needed, there’s a good chance that it won’t work • “code that hasn’t been executed doesn’t work” • When exception handling code fails, it’s difficult to debug the problem, since it occurs so infrequently. ### 10.2 Too many exceptions • Programmers exacerbate the problems related to exception handling by defining unnecessary exceptions. • Most programmers are taught that it’s important to detect and report errors; they often interpret this to mean “the more errors detected, the better.” • This leads to an over-defensive style where anything that looks even a bit suspicious is rejected with an exception, which results in a proliferation of unnecessary exceptions that increase the complexity of the system. • It’s tempting to use exceptions to avoid dealing with difficult situations: rather than figuring out a clean way to handle it, just throw an exception and punt the problem to the caller. • if you are having trouble figuring out what to do for the particular situation, there’s a good chance that the caller won’t know what to do either. • The exceptions thrown by a class are part of its interface; classes with lots of exceptions have complex interfaces, and they are shallower than classes with fewer exceptions. • It can propagate up through several stack levels before being caught, so it affects not just the method’s caller, but potentially also higher-level callers (and their interfaces). • Throwing exceptions is easy; handling them is hard. • The best way to reduce the complexity damage caused by exception handling is to reduce the number of places where exceptions have to be handled. • The rest of this chapter will discuss four techniques for reducing the number of exception handlers. ### 10.3 Define errors out of existence • The best way to eliminate exception handling complexity is to define your APIs so that there are no exceptions to handle: define errors out of existence. • I should have changed the definition of unset slightly: rather than deleting a variable, unset should ensure that a variable no longer exists. #### 10.4 Example: file deletion in Windows • Delaying the file deletion defines errors out of existence. #### 10.5 Example: Java substring method • If errors are defined out of existence, won’t that result in buggier software? • The error-ful approach may catch some bugs, but it also increases complexity, which results in other bugs. • In the error-ful approach, developers must write additional code to avoid or ignore the errors, and this increases the likelihood of bugs • or, they may forget to write the additional code, in which case unexpected errors may be thrown at runtime • In contrast, defining errors out of existence simplifies APIs and it reduces the amount of code that must be written. • Overall, the best way to reduce bugs is to make software simpler. • With this approach (exception masking), an exceptional condition is detected and handled at a low level in the system, so that higher levels of software need not be aware of the condition • Exception masking doesn’t work in all situations, but it is a powerful tool in the situations where it works. • It results in deeper classes, since 1. it reduces the class’s interface (fewer exceptions for users to be aware of) 2. adds functionality in the form of the code that masks the exception. • Exception masking is an example of pulling complexity downward. (8 Pull Complexity Downwards) ### 10.7 Exception aggregation • The idea behind exception aggregation is to handle many exceptions with a single piece of code; • rather than writing distinct handlers for many individual exceptions, • handle them all in one place with a single handler. • A generally-useful design pattern for exception handling. • If a system processes a series of requests, it’s useful to define an exception that aborts the current request, cleans up the system’s state, and continues with the next request. • The exception is caught in a single place near the top of the system’s request-handling loop. • This exception can be thrown at any point in the processing of a request to abort the request; different subclasses of the exception can be defined for different conditions. • Exceptions of this type should be clearly distinguished from exceptions that are fatal to the entire system. • Exception aggregation works best if an exception propagates several levels up the stack before it is handled; • this allows more exceptions from more methods to be handled in the same place. • This is the opposite of exception masking: • masking usually works best if an exception is handled in a low-level method. • For masking, the low-level method is typically a library method used by many other methods, so allowing the exception to propagate would increase the number of places where it is handled • One way of thinking about exception aggregation is that it replaces several special-purpose mechanisms, each tailored for a particular situation, with a single general-purpose mechanism that can handle multiple situations ### 10.8 Just crash? • In most applications there will be certain errors that it’s not worth trying to handle. 1. difficult or impossible to handle 2. don’t occur very often • Whether or not it is acceptable to crash on a particular error depends on the application. ### 10.9 Design special cases out of existence • For the same reason that it makes sense to define errors out of existence, it also makes sense to define other special cases out of existence • Special cases can result in code that is riddled with if statements, which make the code hard to understand and lead to bugs. • The best way to do this is by designing the normal case in a way that automatically handles the special cases without any extra code. • 7 Different Layer, Different Abstraction The notion of “no selection” makes sense in terms of how the user thinks about the application’s interface, but that doesn’t mean it has to be represented explicitly inside the application. Having a selection that always exists, but is sometimes empty and thus invisible, results in a simpler implementation. ### 10.10 Taking it too far • Defining away exceptions, or masking them inside a module, only makes sense if the exception information isn’t needed outside the module. • in the rare situations where a caller cares about the special cases detected by the exceptions, there are other ways for it to get this information. • With exceptions, as with many other areas in software design, you must determine what is important and what is not important. • Things that are not important should be hidden, and the more of them the better. • But when something is important, it must be exposed. ### 10.11 Conclusion • Together, these techniques can have a significant impact on overall system complexity. ## 11 Design it Twice • Designing software is hard, so it’s unlikely that your first thoughts about how to structure a module or system will produce the best design. • You’ll end up with a much better result if you consider multiple options for each major design decision: design it twice. • You don’t need to pin down every feature of each alternative; it’s sufficient at this point to sketch out a few of the most important methods. • Try to pick approaches that are radically different from each other; you’ll learn more that way. • Even if you are certain that there is only one reasonable approach, consider a second design anyway, no matter how bad you think it will be. • It will be instructive to think about the weaknesses of that design and contrast them with the features of other designs. • After you have roughed out the designs for the alternatives, make a list of the pros and cons of each one. • The most important consideration for an interface is ease of use for higher level software. • It is also worth considering other factors: • Does one alternative have a simpler interface than another? • Is one interface more general-purpose than another? • Does one interface enable a more efficient implementation than another? • Once you have compared alternative designs, you will be in a better position to identify the best design. • The best choice may be one of the alternatives, • or you may discover that you can combine features of multiple alternatives into a new design that is better than any of the original choices. • Sometimes none of the alternatives is particularly attractive; when this happens, see if you can come up with additional schemes • Use the problems you identified with the original alternatives to drive the new design(s) • The design-it-twice principle can be applied at many levels in a system • Designing it twice does not need to take a lot of extra time • The initial design experiments will probably result in a significantly better design, which will more than pay for the time spent designing it twice. • The design-it-twice principle is sometimes hard for really smart people to embrace. • if you want to get really great results, you have to consider a second possibility, or perhaps a third, no matter how smart you are. • The design of large software systems falls in this category: no-one is good enough to get it right with their first try. • It isn’t that you aren’t smart; it’s that the problems are really hard • Furthermore, that's a good thing: it’s much more fun to work on a difficult problem where you have to think carefully, rather than an easy problem where you don’t have to think at all. • The design-it-twice approach not only improves your designs, but it also improves your design skills. • The process of devising and comparing multiple approaches will teach you about the factors that make designs better or worse. • Over time, this will make it easier for you to rule out bad designs and hone in on really great ones. ## 12 Why Write Comments? The Four Excuses • In-code documentation plays a crucial role in software design. • Comments are essential to help developers understand a system and work efficiently, • Documentation also plays an important role in abstraction; without comments, you can't hide complexity. • the process of writing comments, if done correctly, will actually improve a system's design. • I hope these chapters will convince you of three things: 1. good comments can make a big difference in the overall quality of software; 2. it isn't hard to write good comments; 3. (this may be hard to believe) writing comments can actually be fun. ### 12.1 Good code is self-documenting • Nonetheless, there is still a significant amount of design information that can't be represented in code. • The informal aspects of an interface, such as a high-level description of what each method does or the meaning of its result, can only be described in comments. • the rationale for a particular design decision, • the conditions under which it makes sense to call a particular method. • Some developers argue that if others want to know what a method does, they should just read the code of the method: this will be more accurate than any comment. • It's possible that a reader could deduce the abstract interface of the method by reading its code, but it would be time-consuming and painful. • In addition, if you write code with the expectation that users will read method implementations, you will try to make each method as short as possible, so that it's easy to read. If the method does anything nontrivial, you will break it up into several smaller methods. This will result in a large number of shallow methods. • Furthermore, it doesn't really make the code easier to read: in order to understand the behavior of the top-level method, readers will probably need to understand the behaviors of the nested methods. • For large systems it isn't practical for users to read the code to learn the behavior. • Moreover, comments are fundamental to abstractions. • If users must read the code of a method in order to use it, then there is no abstraction: • Without comments, the only abstraction of a method is its declaration, which specifies its name and the names and types of its arguments and results. • The declaration is missing too much essential information to provide a useful abstraction by itself. • Comments allow us to capture the additional information that callers need, thereby completing the simplified view while hiding implementation details. • It's also important that comments are written in a human language such as English; this makes them less precise than code, but it provides more expressive power, so we can create simple, intuitive descriptions. ### 12.2 I don't have time to write comments • However, software projects are almost always under time pressure, and there will always be things that seem higher priority than writing comments. • Thus, if you allow documentation to be de-prioritized, you'll end up with no documentation. • The counter-argument to this excuse is the investment mindset • If you want a clean software structure, which will allow you to work efficiently over the long-term, then you must take some extra time up front in order to create that structure. • Furthermore, writing comments needn't take a lot of time. • Furthermore, many of the most important comments are those related to abstractions, such as the top-level documentation for classes and methods. • these comments should be written as part of the design process, (chapter 15) • the act of writing the documentation serves as an important design tool that improves the overall design. These comments pay for themselves immediately. • Keeping documentation up-to-date does not require an enormous effort. • Chapter 16 discusses how to organize documentation so that it is as easy as possible to keep it updated after code modifications • (the key ideas are to avoid duplicated documentation and keep the documentation close to the corresponding code). • Large changes to the documentation are only required if there have been large changes to the code, and the code changes will take more time than the documentation changes. • Code reviews provide a great mechanism for detecting and fixing stale comments. ### 12.4 All the comments I have seen are worthless • writing solid documentation is not hard, once you know how. • The next chapters will lay out a framework for how to write good documentation and maintain it over time. ### 12.5 Benefits of well-written comments • The overall idea behind comments is to capture information that was in the mind of the designer but couldn't be represented in the code. • When other developers come along later to make modifications, the comments will allow them to work more quickly and accurately. • Without documentation, future developers will have to rederive or guess at the developer's original knowledge; • this will take additional time, • and there is a risk of bugs if the new developer misunderstands the original designer's intentions. • Comments are valuable even when the original designer is the one making the changes: if it has been more than a few weeks since you last worked in a piece of code, you will have forgotten many of the details of the original design. • Good documentation helps with the last two of these issues (describes in chapter 2) • Documentation can reduce cognitive load by providing developers with the information they need to make changes and by making it easy for developers to ignore information that is irrelevant. • Documentation can also reduce the unknown unknowns by clarifying the structure of the system, so that it is clear what information and code is relevant for any given change. ## 13 Comments Should Describe Things that Aren't Obvious from the Code • The reason for writing comments is that 1. statements in a programming language can't capture all of the important information that was in the mind of the developer when the code was written. 2. Comments record this information so that developers who come along later can easily understand and modify the code. 3. The guiding principle for comments is that comments should describe things that aren't obvious from the code. • There are many things that aren't obvious from the code. • low-level details • Why code is needed • Why it was implemented in a particular way • Rules the developer followed • One of the most important reasons for comments is abstractions, which include a lot of information that isn't obvious from the code. • The idea of an abstraction is to provide a simple way of thinking about something, • but code is so detailed that it can be hard to see the abstraction just from reading the code. • Developers should be able to understand the abstraction provided by a module without reading any code other than its externally visible declarations. • The only way to do this is by supplementing the declarations with comments. • This chapter discusses • what information needs to be described in comments • how to write good comments. • As you will see, good comments typically explain things at a different level of detail than the code, which is • more detailed in some situations • less detailed (more abstract) in others. ### 13.1 Pick conventions • conventions for commenting, such as • what you will comment • the format you will use for comments. • If you are programming in a language for which there exists a document compilation tool, • follow the conventions of the tools. • None of these conventions is perfect, • but the tools provide enough benefits to make up for that. • If you are programming in an environment where there are no existing conventions to follow, • try to adopt the conventions from some other language or project that is similar; • this will make it easier for other developers to understand and adhere to your conventions. • Conventions serve two purposes. 1. they ensure consistency, 2. they help to ensure that you actually write comments. • If you don't have a clear idea what you are going to comment and how, it's easy to end up writing no comments at all. • Most comments fall into one of the following categories: a comment block that immediately precedes the declaration of a module such as a class, data structure, function, or method. The comment describe's the module's interface. • For a class, the comment describes the overall abstraction provided by the class. • For a method or function, the comment describes its overall behavior, its arguments and return value, if any, any side effects or exceptions that it generates, and any other requirements the caller must satisfy before invoking the method. a comment next to the declaration of a field in a data structure, a comment inside the code of a method or function, which describes how the code works internally. a comment describing dependencies that cross module boundaries. • The most important comments are those in the first two categories. (Interface & Data structure member) • Every class should have an interface comment, • every class variable should have a comment, • every method should have an interface comment. • it is easier to comment everything rather than spend energy worrying about whether a comment is needed. • Implementation comments are often unnecessary (see Section 13.6 below). • Cross-module comments are the most rare of all and they are problematic to write, but when they are needed they are quite important; Section 13.7 discusses them in more detail. ### 13.2 Don't repeat the code • The most common reason is that the comments repeat the code: • After you have written a comment, ask yourself the following question: could someone who has never seen the code write the comment just by looking at the code next to the comment? • Another common mistake is to use the same words in the comment that appear in the name of the entity being documented: • These comments just take the words from the method or variable name, perhaps add a few words from argument names and types, and form them into a sentence. • A first step towards writing good comments is to *use different words in the comment from those in the name of the entity being described*. • Pick words for the comment that provide additional information about the meaning of the entity, • rather than just repeating its name. • example /* * The amount of blank space to leave on the left and right sides of * each line of text, in pixels. */ private static final int textHorizontalPadding = 4; • This comment provides additional information that is not obvious from the declaration itself, such as the units (pixels) and the fact that padding applies to both sides of each line. #### Red Flag: Comment Repeats Code   RedFlag • If the information in a comment is already obvious from the code next to the comment, then the comment isn't helpful. • One example of this is when the comment uses the same words that make up the name of the thing it is describing. • Comments augment the code by providing information at a different level of detail. • by clarifying the exact meaning of the code • offer intuition • the reasoning behind the code • a simpler and more abstract way of thinking about the code • Comments at the same level as the code are likely to repeat the code (Red Flag: Comment Repeats Code) • Precision is most useful when commenting variable declarations • such as • class instance variables • method arguments • return values • The name and type in a variable declaration are typically not very precise • Comments can fill in missing details such as: • What are the units for this variable? • Are the boundary conditions inclusive or exclusive? • If a null value is permitted, what does it imply? • If a variable refers to a resource that must eventually be freed or closed, who is responsible for freeing or closing it? • Are there certain properties that are always true for the variable (invariants), such as “this list always contains at least one entry”? • Some of this information could potentially be figured out by examining all of the code where the variable is used. • However, this is time-consuming and error-prone; • the declaration's comment should be clear and complete enough to make this unnecessary • When documenting a variable, think nouns, not verbs. • Focus on what the variable represents, not how it is manipulated. • Examples • verbs (how it is manipulated) /* FOLLOWER VARIABLE: indicator variable that allows the Receiver and * has been received within the follower's election timeout window. * Toggled to TRUE when a valid heartbeat is received. * Toggled to FALSE when the election timeout window is reset. */ • nouns (what it represents) /* True means that a heartbeat has been received since the last time * the election timer was reset. Used for communication between the ### 13.4 Higher-level comments enhance intuition • They omit details and help the reader to understand the overall intent and structure of the code. • This approach is commonly used for comments inside methods, and for interface comments. • Higher-level comments are more difficult to write than lower-level comments because you must think about the code in a different way. • What is this code trying to do? • What is the simplest thing you can say that explains everything in the code? • Engineers tend to be very detail-oriented. • We love details and are good at managing lots of them; this is essential for being a good engineer. • But, great software designers can also step back from the details and think about a system at a higher level. • This means 1. deciding which aspects of the system are most important, 2. and being able to ignore the low-level details and think about the system only in terms of its most fundamental characteristics • This is the essence of abstraction (finding a simple way to think about a complex entity), • And it's also what you must do when writing higher-level comments. • A good higher-level comment expresses one or a few simple ideas that provide a conceptual framework, • Given the framework, it becomes easy to see how specific code statements relate to the overall goal. • Comments of the form "how we get here" are very useful for helping people to understand code. • it explains (in high level terms) why the code is executed. ### 13.5 Interface documentation • One of the most important roles for comments is to define abstractions • 4.3 Abstractions • Code isn't suitable for describing abstractions 1. it's too low level 2. it includes implementation details that shouldn't be visible in the abstraction • If you want code that presents good abstractions, you must document those abstractions with comments. • The first step in documenting abstractions is to separate interface comments from implementation comments. • Differences • Interface comments provide information that someone needs to know in order to use a class or method; they define the abstraction. • Implementation comments describe how a class or method works internally in order to implement the abstraction. • If interface comments must also describe the implementation, then the class or method is shallow. • The act of writing comments can provide clues about the quality of a design • (15.3 Comments are a design tool) • The interface comment for a method includes both higher-level information for abstraction and lower-level details for precision • The comment usually starts with a sentence or two describing the behavior of the method as perceived by callers; this is the higher-level abstraction. • The comment must describe each argument and the return value (if any). These comments must be very precise, and must describe any constraints on argument values as well as dependencies between arguments. • If the method has any side effects, these must be documented in the interface comment. • A method's interface comment must describe any exceptions that can emanate from the method. • If there are any preconditions that must be satisfied before a method is invoked, these must be described. It is a good idea to minimize preconditions, but any that remain must be documented. • It can be helpful to have examples in the class documentation that illustrate how its methods work together, particularly for deep classes with usage patterns that are non-obvious. • Some of the implementation documentation is useful, but it should go inside the method, where it will be clearly separated from interface documentation #### Red Flag: Implementation Documentation Contaminates Interface   RedFlag This red flag occurs when interface documentation, such as that for a method, describes implementation details that aren't needed in order to use the thing being documented. ### 13.6 Implementation comments: what and why, not how • Most methods are so short and simple that they don't need any implementation comments: given the code and the interface comments, it's easy to figure out how a method works • The main goal of implementation comments is to help readers understand what the code is doing (not how it does it). • In addition to describing what the code is doing, implementation comments are also useful to explain why. • For longer methods, it can be helpful to write comments for a few of the most important local variables. However, most local variables don't need documentation if they have good names. • If the variable is used over a large span of code, then you should consider adding a comment to describe the variable. When documenting variables, focus on what the variable represents, not how it is manipulated in the code. ### 13.7 Cross-module design decisions • The biggest challenge with cross-module documentation is finding a place to put it where it will naturally be discovered by developers. • Sometimes there is an obvious central place to put such documentation. • Unfortunately, in many cases there is not an obvious central place to put cross-module documentation • One possibility is to duplicate parts of the documentation in each location that depends on it. However, this is awkward, and it is difficult to keep such documentation up to date as the system evolves • Alternatively, the documentation can be located in one of the places where it is needed, but in this case it's unlikely that developers will see the documentation or know where to look for it. • I have recently been experimenting with an approach where cross-module issues are documented in a central file called designNotes. • The file is divided up into clearly labeled sections, one for each major topic. • Then, in any piece of code that relates to one of these issues there is a short comment referring to the designNotes file: • However, this has the disadvantage that the documentation is not near any of the pieces of code that depend on it, so it may be difficult to keep up-to-date as the system evolves. ### 13.8 Conclusion • The goal of comments is to ensure that the structure and behavior of the system is obvious to readers, so they can quickly find the information they need and make modifications to the system with confidence that they will work. • Some of this information can be represented in the code in a way that will already be obvious to readers, but there is a significant amount of information that can't easily be deduced from the code. • Comments fill in this information. • When writing comments, try to put yourself in the mindset of the reader and ask yourself what are the key things he or she will need to know • If your code is undergoing review and a reviewer tells you that something is not obvious, • don't argue with them; if a reader thinks it's not obvious, then it's not obvious. • Instead of arguing, try to understand what they found confusing and see if you can clarify that, either with better comments or better code. ## 14 Choosing Names • Good names are a form of documentation: • they make code easier to understand. • They reduce the need for other documentation and make it easier to detect errors. • Conversely, poor name choices • increase the complexity of code • create ambiguities and misunderstandings that can result in bugs. • Name choice is an example of the principle that complexity is incremental. • Choosing a mediocre name for a particular variable, as opposed to the best possible name, probably won’t have much impact on the overall complexity of a system. • However, software systems have thousands of variables; choosing good names for all of these will have a significant impact on complexity and manageability. ### 14.1 Example: bad names cause bugs • The problem was actually quite simple (as are most bugs, once you figure them out). • It took a long process of instrumentation, which eventually showed that the corruption must be happening in a particular statement, before I was able to get past the mental block created by the name and check to see exactly where its value came from. • Unfortunately, most developers don’t spend much time thinking about names. • They tend to use the first name that comes to mind, as long as it’s reasonably close to matching the thing it names. • Take a bit of extra time to choose great names, which are precise, unambiguous, and intuitive. • The extra attention will pay for itself quickly, • and over time you’ll learn to choose good names quickly. ### 14.2 Create an image • When choosing a name, the goal is to create an image in the mind of the reader about the nature of the thing being named. • A good name conveys a lot of information about what the underlying entity is, and, just as important, what it is not. • When considering a particular name, ask yourself: “If someone sees this name in isolation, without seeing its declaration, its documentation, or any code that uses the name, how closely will they be able to guess what the name refers to? Is there some other name that will paint a clearer picture?” • Names are a form of abstraction: • they provide a simplified way of thinking about a more complex underlying entity. • Like other forms of abstraction, the best names are those that focus attention on what is most important about the underlying entity while omitting details that are less important. ### 14.3 Names should be precise • Good names have two properties: 1. precision 2. consistency • The most common problem with names is that they are too generic or vague; • as a result, it’s hard for readers to tell what the name refers to; • the reader may assume that the name refers to something different from reality, • Like all rules, the rule about choosing precise names has a few exceptions. 1. If you can see the entire range of usage of a variable, then the meaning of the variable will probably be obvious from the code so you don’t need a long name. 2. It’s also possible for a name to be too specific, 3. If you find it difficult to come up with a name for a particular variable that is precise, intuitive, and not too long, this is a red flag. • It suggests that the variable may not have a clear definition or purpose. • When this happens, consider alternative factorings. • The process of choosing good names can improve your design by identifying weaknesses. #### Red Flag: Vague Name   RedFlag • If a variable or method name is broad enough to refer to many different things, then • it doesn’t convey much information to the developer • the underlying entity is more likely to be misused. #### Red Flag: Hard to Pick Name   RedFlag If it’s hard to find a simple name for a variable or method that creates a clear image of the underlying object, that’s a hint that the underlying object may not have a clean design. ### 14.4 Use names consistently • In any program there are certain variables that are used over and over again. • Consistent naming reduces cognitive load in much the same way as reusing a common class: • once the reader has seen the name in one context, they can reuse their knowledge and instantly make assumptions when they see the name in a different context. • Consistency has three requirements: 1. always use the common name for the given purpose 2. never use the common name for anything other than the given purpose 3. make sure that the purpose is narrow enough that all variables with the name have the same behavior • Sometimes you will need multiple variables that refer to the same general sort of thing • When this happens, use the common name for each variable but add a distinguishing prefix, such as srcFileBlock and dstFileBlock. • Loops are another area where consistent naming can help • If you use names such as i and j for loop variables, always use i in outermost loops and j for nested loops. • This allows readers to make instant (safe) assumptions about what’s happening in the code when they see a given name. ### 14.5 A different opinion: Go style guide • Some of the developers of the Go language argue that names should be very short, often only a single character • In a presentation on name choice for Go, Andrew Gerrand states that “long names obscure what the code does.” • The Go culture encourages the use of the same short name for multiple different things: ch for character or channel, d for data, difference, or distance, and so on. To me, ambiguous names like these are likely to result in confusion and error, just as in the block example. • Overall, I would argue that readability must be determined by readers, not writers. • Gerrand makes one comment that I agree with: “The greater the distance between a name’s declaration and its uses, the longer the name should be.” ### 14.6 Conclusion • Well chosen names help to make code more obvious; • when someone encounters the variable for the first time, their first guess about its behavior, made without much thought, will be correct. • Choosing good names is an example of the investment mindset discussed in Chapter 3: • if you take a little extra time up front to select good names, it will be easier for you to work on the code in the future. • In addition, you will be less likely to introduce bugs. • Developing a skill for naming is also an investment • When you first decide to stop settling for mediocre names, you may find it frustrating and time-consuming to come up with good names. • However, as you get more experience you’ll find that it becomes easier; eventually, you’ll get to the point where it takes almost no extra time to choose good names, so you will get the benefits almost for free. ## 15 Write The Comments First • The best time to write comments is at the beginning of the process, as you write the code. 1. Writing the comments first makes documentation part of the design process. 2. Not only does this produce better documentation, but it also produces better designs and it makes the process of writing documentation more enjoyable. 1. Delaying documentation often means that it never gets written at all. (LeBlanc's Law / On Agile: Why You Won't Fix It Later) • Once you start delaying, it's easy to delay a bit more; after all, the code will be even more stable in a few more weeks. • By the time the code has inarguably stabilized, there is a lot of it, which means the task of writing documentation has become huge and even less attractive. • There's never a convenient time to stop for a few days and fill in all of the missing comments, • it's easy to rationalize that the best thing for the project is to move on and fix bugs or write the next new feature. • This will create even more undocumented code. 2. Even if you do have the self-discipline to go back and write the comments (and don't fool yourself: you probably don't), the comments won't be very good. • By this time in the process, you have checked out mentally. • In your mind, this piece of code is done; • you are eager to move on to your next project. • You know that writing comments is the right thing to do, but it's no fun. • You just want to get through it as quickly as possible. • Thus, you make a quick pass over the code, adding just enough comments to look respectable. • By now, it's been a while since you designed the code, so your memories of the design process are becoming fuzzy. • You look at the code as you are writing the comments, so the comments repeat the code. • Even if you try to reconstruct the design ideas that aren't obvious from the code, there will be things you don't remember. • Thus, the comments are missing some of the most important things they should describe. ### 15.2 Write the comments first • A different approach • For a new class, I start by writing the class interface comment. • Next, I write interface comments and signatures for the most important public methods, but I leave the method bodies empty. • I iterate a bit over these comments until the basic structure feels about right. • At this point I write declarations and comments for the most important class instance variables in the class. • Finally, I fill in the bodies of the methods, adding implementation comments as needed. • While writing method bodies, I usually discover the need for additional methods and instance variables. For each new method I write the interface comment before the body of the method; for instance variables I fill in the comment at the same time that I write the variable declaration. • The comments-first approach has three benefits. • If you write the comments as you are designing the class, the key design issues will be fresh in your mind, so it's easy to record them. • It's better to write the interface comment for each method before its body, so you can focus on the method's abstraction and interface without being distracted by its implementation. • During the coding and testing process you will notice and fix problems with the comments. As a result, the comments improve over the course of development. 2. it improves the system design. (The most important) 3. it makes comment-writing more fun. ### 15.3 Comments are a design tool • Comments provide the only way to fully capture abstractions, and good abstractions are fundamental to good system design. • If you write comments describing the abstractions at the beginning, you can review and tune them before writing implementation code. • To write a good comment, you must identify the essence of a variable or piece of code: what are the most important aspects of this thing? It's important to do this early in the design process; otherwise you are just hacking code. • Comments serve as a canary in the coal mine of complexity. • If a method or variable requires a long comment, it is a red flag that you don't have a good abstraction. • The best way to judge the complexity of an interface is from the comments that describe it. • If the interface comment for a method provides all the information needed to use the method and is also short and simple, that indicates that the method has a simple interface. • Conversely, if there's no way to describe a method completely without a long and complicated comment, then the method has a complex interface. • You can compare a method's interface comment with the implementation to get a sense of how deep the method is: if the interface comment must describe all the major features of the implementation, then the method is shallow. • The same idea applies to variables: if it takes a long comment to fully describe a variable, it's a red flag that suggests you may not have chosen the right variable decomposition. • Of course, comments are only a good indicator of complexity if they are complete and clear. #### Red Flag: Hard to Describe • The comment that describes a method or variable should be simple and yet complete. • If you find it difficult to write such a comment, that's an indicator that there may be a problem with the design of the thing you are describing. • Finding simple comments is a source of pride. • The comments are how I record and test the quality of my design decisions. • Looking for the design that can be expressed completely and clearly in the fewest words. • The simpler the comments, the better I feel about my design, • If you are programming strategically, where your main goal is a great design rather than just writing code that works, then writing comments should be fun, since that's how you identify the best designs. ### 15.5 Are early comments expensive? • Delaying the comments until the end will save only a fraction of this, which isn't very much. 2. Writing the comments first will mean that the abstractions will be more stable before you start writing code. ### 15.6 Conclusion If you haven't ever tried writing the comments first, give it a try. Stick with it long enough to get used to it. Then think about how it affects the quality of your comments, the quality of your design, and your overall enjoyment of software development. ## 16 Modifying Existing Code • A large software system develops through a series of evolutionary stages, where each stage adds new capabilities and modifies existing modules. • This means that a system's design is constantly evolving. • It isn't possible to conceive the right design for a system at the outset; • the design of a mature system is determined more by changes made during the system's evolution than by any initial conception. • this chapter discusses how to keep complexity from creeping in as the system evolves. ### 16.1 Stay strategic • This idea (from chapter 3) also applies when you are modifying existing code. • Unfortunately, when developers go into existing code to make changes such as bug fixes or new features, they don't usually think strategically. • A typical mindset is "what is the smallest possible change I can make that does what I need?" • Sometimes developers justify this because they are not comfortable with the code being modified; they worry that larger changes carry a greater risk of introducing new bugs. • However, this results in tactical programming. • Each one of these minimal changes introduces a few special cases, dependencies, or other forms of complexity. • As a result, the system design gets just a bit worse, and the problems accumulate with each step in the system's evolution. • If you want to maintain a clean design for a system, you must take a strategic approach when modifying existing code. • *Ideally, when you have finished with each change, the system will have the structure it would have had if you had designed it from the start with that change in mind.* • To achieve this goal, • you must resist the temptation to make a quick fix. • Instead, think about whether the current system design is still the best one, in light of the desired change. • If not, refactor the system so that you end up with the best possible design. • This is also an example of the investment mindset • Even if your particular change doesn't require refactoring, you should still be on the lookout for design imperfections that you can fix while you're in the code. • Whenever you modify any code, try to find a way to improve the system design at least a little bit in the process. • If you're not making the design better, you are probably making it worse. • an investment mindset sometimes conflicts with the realities of commercial software development. • Nonetheless, you should resist these compromises as much as possible. • Ask yourself "Is this the best I can possibly do to create a clean system design, given my current constraints?" • Perhaps there's an alternative approach that would be almost as clean as the 3-month refactoring but could be done in a couple of days? • Or, if you can't afford to do a large refactoring now, get your boss to allocate time for you to come back to it after the current deadline. • Every development organization should plan to spend a small fraction of its total effort on cleanup and refactoring; this work will pay for itself over the long run. • When you change existing code, there's a good chance that the changes will invalidate some of the existing comments. • with a little discipline and a couple of guiding rules, it's possible to keep comments up-to-date without a huge effort. • The best way to ensure that comments get updated is to position them close to the code they describe, • so developers will see them when they change the code. • The farther a comment is from its associated code, the less likely it is that it will be updated properly. • users should not need to read either code or header files; they should get their information from documentation compiled by tools • Given tools such as these, the documentation should be located in the place that is most convenient for developers working on the code. • When writing implementation comments, don't put all the comments for an entire method at the top of the method. • Spread them out, pushing each comment down to the narrowest scope that includes all of the code referred to by the comment. • In general, the farther a comment is from the code it describes, the more abstract it should be • this reduces the likelihood that the comment will be invalidated by code changes ### 16.3 Comments belong in the code, not the commit log • A common mistake when modifying code is to put detailed information about the change in the commit message for the source code repository, but then not to document it in the code. • Although commit messages can be browsed in the future by scanning the repository's log, • a developer who needs the information is unlikely to think of scanning the repository log. • Even if they do scan the log, it will be tedious to find the right log message. • When writing a commit message, ask yourself whether developers will need to use that information in the future. • If so, then document this information in the code. • If you want to include a copy of this information in the commit message as well, that's fine, but the most important thing is to get it in the code. • This illustrates the principle of placing documentation in the place where developers are most likely to see it; the commit log is rarely that place. ### 16.4 Maintaining comments: avoid duplication • The second technique for keeping comments up to date is to avoid duplication. • If documentation is duplicated, it is more difficult for developers to find and update all of the relevant copies. • Instead, try to document each design decision exactly once. • If there are multiple places in the code that are affected by a particular decision, don't repeat the documentation at each of these points. • Find the most obvious single place to put the documentation. • If there is no "obvious" single place to put a particular piece of documentation where developers will find it, • create a designNotes file as described in Section 13.7. • Or, pick the best of the available places and put the documentation there. • If the reference becomes obsolete because the master comment was moved or deleted, this inconsistency will be self-evident because developers won't find the comment at the indicated place; they can use revision control history to find out what happened to the comment and then update the reference. • In contrast, if the documentation is duplicated and some of the copies don't get updated, there will be no indication to developers that they are using stale information. • Don't redocument one module's design decisions in another module. • don't put comments before a method call that explain what happens in the called method. • If readers want to know, they should look at the interface comments for the method. • Good development tools will usually provide this information automatically, • If information is already documented someplace outside your program, don't repeat the documentation inside the program; just reference the external documentation. • It's important that readers can easily find all the documentation needed to understand your code, but that doesn't mean you have to write all of that documentation. ### 16.5 Maintaining comments: check the diffs • One good way to make sure documentation stays up to date is to 1. take a few minutes before committing a change to your revision control system to scan over all the changes for that commit; 2. make sure that each change is properly reflected in the documentation. • These pre-commit scans will also detect several other problems, such as accidentally leaving debugging code in the system or failing to fix TODO items. ### 16.6 Higher-level comments are easier to maintain • comments are easier to maintain if they are higher-level and more abstract than the code. • These comments do not reflect the details of the code, • so they will not be affected by minor code changes; • only changes in overall behavior will affect these comments. • in general, the comments that are most useful (they don't simply repeat the code) are also easiest to maintain. ## 17 Consistency • If a system is consistent, it means that 1. similar things are done in similar ways 2. dissimilar things are done in different ways • Consistency creates cognitive leverage: once you have learned how something is done in one place, you can use that knowledge to immediately understand other places that use the same approach. • If a system is not implemented in a consistent fashion, developers must learn about each situation separately. This will take more time. • Consistency reduces mistakes. • If a system is not consistent, two situations may appear the same when in fact they are different. A developer may see a pattern that looks familiar and make incorrect assumptions based on previous encounters with that pattern. • On the other hand, if the system is consistent, assumptions made based on familiar-looking situations will be safe. Consistency allows developers to work more quickly with fewer mistakes. ### 17.1 Examples of consistency • Consistency can be applied at many levels in a system; • Names • Coding style • Interfaces • Design patterns • Invariants ### 17.2 Ensuring consistency • consistency is hard to maintain • A few tips for establishing and maintaining consistency • Document • Create a document that lists the most important overall conventions • Place the document in a spot where developers are likely to see it • Encourage new people joining the group to read the document • Encourage existing people to review it every once in a while • For conventions that are more localized, find an appropriate spot in the code to document them • Enforce • The best way to enforce conventions is to write a tool that checks for violations • Make sure that code cannot be committed to the repository unless it passes the checker • Code reviews provide another opportunity for enforcing conventions and for educating new developers about the conventions • The more nit-picky that code reviewers are, the more quickly everyone on the team will learn the conventions, and the cleaner the code will be • When in Rome • When working in a new file, look around to see how the existing code is structured • When making a design decision, • ask yourself if it's likely that a similar decision was made elsewhere in the project • if so, find an existing example and use the same approach in your new code • Don't change existing conventions • Having a "better idea" is not a sufficient excuse to introduce inconsistencies • The value of consistency over inconsistency is almost always greater than the value of one approach over another • Before introducing inconsistent behavior, ask yourself two questions 1. Do you have significant new information justifying your approach that wasn't available when the old convention was established? 2. Is the new approach so much better that it is worth taking the time to update all of the old uses? • When you are done (upgrading from the old convention to the new convention) • There should be no sign of the old convention • However, you still run the risk that other developers will not know about the new convention, so they may reintroduce the old approach in the future • Overall, reconsidering establishes conventions is rarely a good use of developer time ### 17.3 Taking it too far • If you become overzealous about consistency and try to force dissimilar things into the same approach, you'll create complexity and confusion • Consistency only provides benefits when developers have confidence that "if it looks like an x, it really is an x" ### 17.4 Conclusion • Consistency is another example of the investment mindset • It will take a bit of extra work to ensure consistency • Work to decide on conventions • Work to crate automated checkers • Work to look for similar situations to mimic in new code • Work in code reviews to educate the team • The return on this investment is that your code will be more obvious • Developers will be able to understand the code's behavior more quickly and accurately • This will allow them to work faster, with fewer bugs ## 18 Code Should be Obvious • The solution to the obscurity problem (Section 2.3) is to write code in a way that makes it obvious; • this chapter discusses some of the factors that make code more or less obvious. • If code is obvious, it means that • someone can read the code quickly, without much thought, and their first guesses about the behavior or meaning of the code will be correct. • a reader doesn't need to spend much time or effort to gather all the information they need to work with the code. • If code is not obvious, then a reader must expend a lot of time and energy to understand it. • Not only does this reduce their efficiency, • but it also increases the likelihood of misunderstanding and bugs. • Obvious code needs fewer comments than nonobvious code. • "Obvious" is in the mind of the reader: • it's easier to notice that someone else's code is nonobvious than to see problems with your own code. • Thus, the best way to determine the obviousness of code is through code reviews. • If someone reading your code says it's not obvious, then it's not obvious, no matter how clear it may seem to you. • By trying to understand what made the code nonobvious, you will learn how to write better code in the future. ### 18.1 Things that make code more obvious • Two of the most important techniques for making code obvious have already been discussed in previous chapters. 1. The first is choosing good names • Precise and meaningful names clarify the behavior of the code and reduce the need for documentation. • If a name is vague or ambiguous, then readers will have read through the code in order to deduce the meaning of the named entity; this is time-consuming and error-prone. 2. consistency (Chapter 17). • If similar things are always done in similar ways, then readers can recognize patterns they have seen before and immediately draw (safe) conclusions without analyzing the code in detail. • Here are a few other general-purpose techniques for making code more obvious: • Judicious use of white space. • The way code is formatted can impact how easy it is to understand. • Sometimes it isn't possible to avoid code that is nonobvious. When this happens, it's important to use comments to compensate by providing the missing information. • To do this well, you must put yourself in the position of the reader and figure out what is likely to confuse them, and what information will clear up that confusion. ### 18.2 Things that make code less obvious Some of these, such as event-driven programming, are useful in some situations, so you may end up using them anyway. When this happens, extra documentation can help to minimize reader confusion. • Event-driven programming. • Event-driven programming makes it hard to follow the flow of control. • To compensate for this obscurity, use the interface comment for each handler function to indicate when it is invoked, • Generic containers. • Many languages provide generic classes for grouping two or more items into a single object, such as Pair in Java or std::pair in C++. • These classes are tempting because they make it easy to pass around several objects with a single variable. • Unfortunately, generic containers result in nonobvious code because the grouped elements have generic names that obscure their meaning. • Thus, it's better not to use generic containers. • If you need a container, define a new class or structure that is specialized for the particular use. • You can then use meaningful names for the elements, and you can provide additional documentation in the declaration, which is not possible with the generic container. • a general rule: software should be designed for ease of reading, not ease of writing. • Different types for declaration and allocation. • Code that violates reader expectations. • Code is most obvious if it conforms to the conventions that readers will be expecting; • if it doesn't, then it's important to document the behavior so readers aren't confused. #### Red Flag: Nonobvious Code   RedFlag • If the meaning and behavior of code cannot be understood with a quick reading, it is a red flag. • Often this means that there is important information that is not immediately clear to someone reading the code. ### 18.3 Conclusion • Another way of thinking about obviousness is in terms of information. • If code is nonobvious, that usually means there is important information about the code that the reader does not have: • To make code obvious, you must ensure that readers always have the information they need to understand it. • You can do this in three ways. 1. The best way is to reduce the amount of information that is needed, using design techniques such as abstraction and eliminating special cases. 2. take advantage of information that readers have already acquired in other contexts (for example, by following conventions and conforming to expectations) so readers don't have to learn new information for your code. 3. present the important information to them in the code, using techniques such as good names and strategic comments. ## 19 Software Trends • This chapter: • several trends and patterns that have become popular in software development over the last few decades • For each trend: • how that trend relates to the principles in this book • use the principles to evaluate whether that trend provides leverage against software complexity. ### 19.1 Object-oriented programming and inheritance • One of the key elements of object-oriented programming is inheritance. • Inheritance comes in two forms, which have different implications for software complexity 1. Interface inheritance • Interface inheritance provides leverage against complexity by reusing the same interface for multiple purposes • It allows knowledge acquired in solving one problem to be used to solve other problems. Knowledge acquired in solving one problem such as how to use an I/O interface to read and write disk files Solve other problems such as communicating over a network socket • The more different implementations there are of an interface, the deeper the interface becomes • In order for an interface to have many implementations, it must (this notion is at the heart of abstraction.) 1. capture the essential features of all the underlying implementations 2. while steering clear of the details that differ between the implementations; 2. Implementation inheritance • Without implementation inheritance, the same method implementation might need to be duplicated in several subclasses, which would create dependencies between those subclasses • Thus, implementation inheritance reduces the amount of code that needs to be modified as the system evolves • However, implementation inheritance creates dependencies between the parent class and each of its subclasses • this results in information leakage between the classes in the inheritance hierarchy and makes it hard to modify one class in the hierarchy without looking at the others • In the worst case, programmers will need complete knowledge of the entire class hierarchy underneath the parent class in order to make changes to any of the classes. • Class hierarchies that use implementation inheritance extensively tend to have high complexity. • Thus, implementation inheritance should be used with caution. • Before using implementation inheritance, consider whether an approach based on composition can provide the same benefits (Composition over Inheritance) • If there is no viable alternative to implementation inheritance, try to separate the state managed by the parent class from that managed by subclasses • This applies the notion of information hiding within the class hierarchy to reduce dependencies. • Although the mechanisms provided by object-oriented programming can assist in implementing clean designs, they do not, by themselves, guarantee good design ### 19.2 Agile development • One of the most important elements of agile development is the notion that development should be incremental and iterative. • 1 Introduction (It's All About Complexity) • The best way to end up with a good design is to develop a system in increments, where each increment adds a few new abstractions and refactors existing abstractions based on experience • One of the risks of agile development is that it can lead to tactical programming • Agile development tends to focus developers on features, not abstractions, and it encourages developers to put off design decisions in order to produce working software as soon as possible • For example, some agile practitioners argue that you shouldn’t implement general-purpose mechanisms right away; implement a minimal special-purpose mechanism to start with, and refactor into something more generic later, once you know that it’s needed. • Although these arguments make sense to a degree, they argue against an investment approach, and they encourage a more tactical style of programming. • This can result in a rapid accumulation of complexity. • Developing incrementally is generally a good idea, but the increments of development should be abstractions, not features. • It’s fine to put off all thoughts about a particular abstraction until it’s needed by a feature. • Once you need the abstraction, • invest the time to design it cleanly; • follow the advice of Chapter 6 and make it somewhat general-purpose. ### 19.3 Unit tests • Tests, particularly unit tests, play an important role in software design because they facilitate refactoring • Without a test suite, it’s dangerous to make major structural changes to a system • There’s no easy way to find bugs, so it’s likely that bugs will go undetected until the new code is deployed, where they are much more expensive to find and fix. • As a result, developers avoid refactoring in systems without good test suites; • they try to minimize the number of code changes for each new feature or bug fix, • which means that complexity accumulates and design mistakes don’t get corrected. • With a good set of tests, developers can be more confident when refactoring • because the test suite will find most bugs that are introduced. • This encourages developers to make structural improvements to a system, which results in a better design. • Unit tests are particularly valuable: they provide a higher degree of code coverage than system tests, so they are more likely to uncover any bugs. ### 19.4 Test-driven development • The problem with test-driven development is that it focuses attention on getting specific features working, rather than finding the best design. • This is tactical programming pure and simple, with all of its disadvantages • Test-driven development is too incremental: at any point in time, it’s tempting to just hack in the next feature to make the next test pass. • There’s no obvious time to do design, so it’s easy to end up with a mess. • The units of development should be abstractions, not features. (19.2 Agile development) • Once you discover the need for an abstraction, • don’t create the abstraction in pieces over time; • design it all at once (or at least enough to provide a reasonably comprehensive set of core functions). • This is more likely to produce a clean design whose pieces fit together well. • One place where it makes sense to write the tests first is when fixing bugs. • Before fixing a bug, write a unit test that fails because of the bug. • Then fix the bug and make sure that the unit test now passes. • This is the best way to make sure you really have fixed the bug. • If you fix the bug before writing the test, it’s possible that the new unit test doesn’t actually trigger the bug, in which case it won’t tell you whether you really fixed the problem. ### 19.5 Design patterns • Design patterns represent an alternative to design: rather than designing a new mechanism from scratch, just apply a well-known design pattern. • For the most part, this is good: design patterns arose because 1. they solve common problems, 2. they are generally agreed to provide clean solutions. • The greatest risk with design patterns is over-application. • Not every problem can be solved cleanly with an existing design pattern; • Don’t try to force a problem into a design pattern when a custom approach will be cleaner • Using design patterns doesn’t automatically improve a software system; it only does so if the design patterns fit. • As with many ideas in software design, the notion that design patterns are good doesn’t necessarily mean that more design patterns are better. ### 19.6 Getters and setters • Getters and setters are shallow methods (typically only a single line), so they add clutter to the class’s interface without providing much functionality. • It’s better to avoid getters and setters (or any exposure of implementation data) as much as possible. • One of the risks of establishing a design pattern is that developers assume the pattern is good and try to use it as much as possible. • This has led to overusage of getters and setters in Java. ### 19.7 Conclusion • Whenever you encounter a proposal for a new software development paradigm, challenge it from the standpoint of complexity: • does the proposal really help to minimize complexity in large software systems? • Many proposals sound good on the surface, but if you look more deeply you will see that some of them make complexity worse, not better. ## 20 Designing for Performance • This chapter discusses • What if you are working on a system that needs to be fast? • How should performance considerations affect the design process? • how to achieve high performance without sacrificing clean design. • The most important idea is still simplicity: • not only does simplicity improve a system's design, • but it usually makes systems faster. ### 20.1 How to think about performance • How much should you worry about performance during the normal development process? • If you try to optimize every statement for maximum speed, • it will slow down development and create a lot of unnecessary complexity. • Furthermore, many of the “optimizations” won't actually help performance. • On the other hand, if you completely ignore performance issues, • it's easy to end up with a large number of significant inefficiencies spread throughout the code; • the resulting system can easily be 5–10x slower than it needs to be. • The best approach is something between these extremes, where you use basic knowledge of performance to choose design alternatives that are “naturally efficient” yet also clean and simple. • The key is to develop an awareness of which operations are fundamentally expensive • Once you have a general sense for what is expensive and what is cheap, you can use that information to choose cheap operations whenever possible • In many cases, a more efficient approach will be just as simple as a slower approach. • If the only way to improve efficiency is by adding complexity, then the choice is more difficult • If the more efficient design adds only a small amount of complexity, and if the complexity is hidden, so it doesn't affect any interfaces, then it may be worthwhile (but beware: complexity is incremental). • If the faster design adds a lot of implementation complexity, or if it results in more complicated interfaces, then it may be better to start off with the simpler approach and optimize later if performance turns out to be a problem. • However, if you have clear evidence that performance will be important in a particular situation, then you might as well implement the faster approach immediately. • In general, simpler code tends to run faster than complex code. • If you have defined away special cases and exceptions, then no code is needed to check for those cases and the system runs faster. • Deep classes are more efficient than shallow ones, because they get more work done for each method call. Shallow classes result in more layer crossings, and each layer crossing adds overhead. ### 20.2 Measure before modifying • Programmers' intuitions about performance are unreliable • If you start making changes based on intuition, you'll waste time on things that don't actually improve performance, and you'll probably make the system more complicated in the process. • Before making any changes, measure the system's existing behavior 1. the measurements will identify the places where performance tuning will have the biggest impact. • You'll need to measure deeper to identify in detail the factors that contribute to overall performance; • the goal is to identify a small number of very specific places where the system is currently spending a lot of time, and where you have ideas for improvement 2. provide a baseline, so that you can re-measure performance after making your changes to ensure that performance actually improved • If the changes didn't make a measurable difference in performance, then back them out (unless they made the system simpler). • There's no point in retaining complexity unless it provides a significant speedup. ### 20.3 Design around the critical path • The best way to improve its performance is with a “fundamental” change, such as introducing a cache, or using a different algorithmic approach • Unfortunately, situations will sometimes arise where there isn't a fundamental fix • This brings us to the core issue for this chapter, which is how to redesign an existing piece of code so that it runs faster. • The key idea is to design the code around the critical path. • Start off by asking yourself what is the smallest amount of code that must be executed to carry out the desired task in the common case • The ideal code probably clashes with your existing class structure, and it may not be practical, but it provides a good target: this represents the simplest and fastest that the code can ever be. • The next step is to look for a new design that comes as close as possible to the ideal while still having a clean structure • One of the most important things that happens in this process is to remove special cases from the critical path • When code is slow, it's often because it must handle a variety of situations, and the code gets structured to simplify the handling of all the different cases. • Ideally, there will be a single if statement at the beginning, which detects all special cases with one test. In the normal case, only this one test will need to be made, after which the the critical path can be executed with no additional tests for special cases • Performance isn't as important for special cases, so you can structure the special-case code for simplicity rather than performance. ### 20.5 Conclusion • clean design and high performance are compatible. • Complicated code tends to be slow because it does extraneous or redundant work. • On the other hand, if you write clean, simple code, your system will probably be fast enough that you don't have to worry much about performance in the first place. • In the few cases where you do need to optimize performance, the key is simplicity again: find the critical paths that are most important for performance and make them as simple as possible. ## 21 Conclusion • This book is about one thing: complexity • Dealing with complexity is the most important challenge in software design. • It is what makes systems hard to build and maintain, • and it often makes them slow as well • Over the course of the book 1. the root causes that lead to complexity 3. general ideas you can use to create simpler software systems 4. the investment mindset needed to produce simple designs • The downside of all these suggestions is that they create extra work in the early stages of a project • Furthermore, if you aren’t used to thinking about design issues, then you will slow down even more while you learn good design techniques. • If the only thing that matters to you is making your current code work as soon as possible, then thinking about design will seem like drudge work that is getting in the way of your real goal. • On the other hand, if good design is an important goal for you, then the ideas in this book should make programming more fun. • Design is a fascinating puzzle: how can a particular problem be solved with the simplest possible structure? • It’s fun to explore different approaches, and it’s a great feeling to discover a solution that is both simple and powerful. • A clean, simple, and obvious design is a beautiful thing. • Furthermore, the investments you make in good design will pay off quickly. • The modules you defined carefully at the beginning of a project will save you time later as you reuse them over and over. • The clear documentation that you wrote six months ago will save you time when you return to the code to add a new feature. • The time you spent honing your design skills will also pay for itself: • as your skills and experience grow, you will find that you can produce good designs more and more quickly. • Good design doesn’t really take much longer than quick-and-dirty design, once you know how. • The reward for being a good designer is that you get to spend a larger fraction of your time in the design phase, which is fun. • Poor designers spend most of their time chasing bugs in complicated and brittle code. • If you improve your design skills, not only will you produce higher quality software more quickly, but the software development process will be more enjoyable.
{}
Oliver Sacks: His Own Life Review, 6th Armoured Division In Brittany, Scunthorpe To Ashby Bus Times, Baroque Period Art, Con Meaning In English, Death In Pratt, Ks, Saipem 7000 Cayman, " /> Recur-sive Neural Tensor Networks take as input phrases of any length. The purpose of this book is to provide recent advances of architectures, The model The recursive neural network and its applications in control theory OutlineRNNs RNNs-FQA RNNs-NEM ... ∙A Neural Network for Factoid Question Answering over Paragraphs ... Bag-of-Words V.S. The main function of the cells is to decide what to keep in mind and what to omit from the memory. Universal approximation capability of RNN over trees has been proved in literature.[10][11]. The structure of the tree is often indicated by the data. Then, we put the cell state through tanh to push the values to be between -1 and 1 and multiply it by the output of the sigmoid gate, so that we only output the parts we decided to. {\displaystyle p_{1,2}=\tanh \left(W[c_{1};c_{2}]\right)}. These neural networks are called Recurrent because this step is carried out for every input. For example if you have a sequence. It is an essential step to represent text with a dense vector for many NLP tasks, such as text classification [Liu, Qiu, and Huang2016] and summarization [See, Liu, and Manning2017]Traditional methods represent text with hand-crafted sparse lexical features, such as bag-of-words and n-grams [Wang and Manning2012, Silva et al.2011] And in the tanh function its gives the weightage to the values which are passed deciding their level of importance(-1 to 1). Specifically, we combined the CNN and RNN in order to propose the CNN-RNN framework that can deepen the understanding of image content and learn the structured features of images and to begin endto-end training of big data in medical image analysis. [1] To understand the activation functions and the math behind it go here. (2009) were able to scale up deep networks to more realistic image sizes. theory and applications M. Bianchini*, M. Maggini, L. Sarti, F. Scarselli Dipartimento di Ingegneria dell’Informazione Universita degli Studi di Siena Via Roma, 56 53100 - Siena (Italy) Abstract In this paper, we introduce a new recursive neural network model able to process directed acyclic graphs with labelled edges. Urban G(1), Subrahmanya N(2), Baldi P(1). 2 Made perfect sense! The diagnosis of blood-related diseases involves the identification and characterization of a patient's blood sample. Extensions to graphs include Graph Neural Network (GNN),[13] Neural Network for Graphs (NN4G),[14] and more recently convolutional neural networks for graphs. ⁡ Recursive Neural Tensor Network (RNTN). Let’s use Recurrent Neural networks to predict the sentiment of various tweets. The first step in the LSTM is to decide which information to be omitted in from the cell in that particular time step. Dropout was employed to reduce over-fitting to the training data. Type of neural network which utilizes recursion, "Parsing Natural Scenes and Natural Language with Recursive Neural Networks", "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", https://en.wikipedia.org/w/index.php?title=Recursive_neural_network&oldid=994091818, Creative Commons Attribution-ShareAlike License, This page was last edited on 14 December 2020, at 02:01. Lets look at each step. As these neural network consider the previous word during predicting, it acts like a memory storage unit which stores it for a short period of time. A recursive neural network can be seen as a generalization of the recurrent neural network [5], which has a specific type of skewed tree structure (see Figure 1). In recent years, deep convolutional neural networks (CNNs) have been widely used for image super-resolution (SR) to achieve a range of sophisticated performances. A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. Let me open this article with a question – “working love learning we on deep”, did this make any sense to you? Top 8 Deep Learning Frameworks Lesson - 4. ( Given the structural representation of a sentence, e.g. 8.1 A Feed Forward Network Rolled Out Over Time Sequential data can be found in any time series such as audio signal, stock market prices, vehicle trajectory but also in natural language processing (text). The work here represents the algorithmic equivalent of the work in Ref. While training we set xt+1 = ot, the output of the previous time step will be the input of the present time step. Kishan Maladkar holds a degree in Electronics and Communication Engineering, exploring the field of Machine Learning and Artificial Intelligence. In the sigmoid function, it decided which values to let through(0 or 1). Typically, stochastic gradient descent (SGD) is used to train the network. (2)ExxonMobil Research and Engineering , Annandale, New Jersey 08801, United States. Artificial Neural Network(ANN) uses the processing of the brain as a basis to develop algorithms that can be used to model complex patterns and prediction problems. Inner and Outer Recursive Neural Networks for Chemoinformatics Applications Gregor Urban,,yNiranjan Subrahmanya,z and Pierre Baldi yDepartment of Computer Science, University of California, Irvine, Irvine, California 92697, United States zExxonMobil Reserach and Engineering, Annandale, New Jersey 08801, United States E-mail: gurban@uci.edu; niranjan.a.subrahmanya@exxonmobil.com; pfbaldi@uci.edu 2.1 Recursive Neural Networks Recursive neural networks (e.g.) They have been applied to parsing [6], sentence-level sentiment analysis [7, 8], and paraphrase de-tection [9]. 2. Copyright Analytics India Magazine Pvt Ltd, Guide To CoinMarketCap Dataset For Time Series Analysis – Historical prices Of All Cryptocurrencies, Consumer Electronics Producers LG, Sony, Samsung Give Telly An AI Touch, Top Deep Learning Based Time Series Methods, Gated Recurrent Unit – What Is It And How To Learn, Name Language Prediction using Recurrent Neural Network in PyTorch, Foreign Exchange Rate Prediction using LSTM Recurrent Neural Network, Comparing ARIMA Model and LSTM RNN Model in Time-Series Forecasting, Webinar | Multi–Touch Attribution: Fusing Math and Games | 20th Jan |, Machine Learning Developers Summit 2021 | 11-13th Feb |. [3] and can be viewed as a complement to that work. In this paper, we propose two lightweight deep neural … Neural models are the dominant approach in many NLP tasks. 299–307, 2008. [3]. They are also used in (16) for Clinical decision support systems. Figure 19: Recursive neural networks applied on a sentence for sentiment classification. The applications of RNN in language models consist of two main approaches. The probability of the output of a particular time-step is used to sample the words in the next iteration(memory). LSTM network have a sequence like structure, but the recurring network has a different module. Furthermore in (17) a recurrent fuzzy neural network for control of dynamic systems is proposed. Recurrent Neural Networks (RNN) are special type of neural architectures designed to be used on sequential data. However, I shall be coming up with a detailed article on Recurrent Neural networks with scratch with would have the detailed mathematics of the backpropagation algorithm in a recurrent neural network. [33] [34] They can process distributed representations of structure, such as logical terms. Recursive Neural Networks and Its Applications LU Yangyang luyy11@sei.pku.edu.cn KERE Seminar Oct. 29, 2014. al [22] proposed DeepChrome, a classical Convolutional Neural Network (CNN), with one convolutional layer and two fully connected layers. • Neural network basics • NN architectures • Feedforward Networks and Backpropagation • Recursive Neural Networks • Recurrent Neural Networks • Applications • Tagging • Parsing • Machine Translation and Encoder-Decoder Networks 12 [4], RecCC is a constructive neural network approach to deal with tree domains[2] with pioneering applications to chemistry[5] and extension to directed acyclic graphs. theory and applications M. Bianchini*, M. Maggini, L. Sarti, F. Scarselli Dipartimento di Ingegneria dell’Informazione Universita degli Studi di Siena Via Roma, 56 53100 - Siena (Italy) Abstract In this paper, we introduce a new recursive neural network model able to process directed acyclic graphs with labelled edges. Multilayered perceptron (MLP) network trained using back propagation (BP) algorithm is the most popular choice in neural network applications. 8.1A Feed Forward Network Rolled Out Over Time Sequential data can be found in any time series such as audio signal, The purpose of this book is to provide recent advances of architectures, From Siri to Google Translate, deep neural networks have enabled breakthroughs in machine understanding of natural language. The gradient is computed using backpropagation through structure (BPTS), a variant of backpropagation through time used for recurrent neural networks. Introduction to Neural Networks, Advantages and Applications. Artificial neural networks may probably be the single most successful technology in the last two decades which has been widely used in a large variety of applications. The past state, the current memory and the present input work together to predict the next output. Recursive CC is a neural network model recently proposed for the processing of structured data. In this section of the Machine Learning tutorial you will learn about artificial neural networks, biological motivation, weights and biases, input, hidden and output layers, activation function, gradient descent, backpropagation, long-short term memory, convolutional, recursive and recurrent neural networks. However, MLP network and BP algorithm can be considered as the 24 A recursive neural network has feedback; the output vector is used as additional inputs to the network at the next time step. This paper presents an image parsing algorithm which is based on Particle Swarm Optimization (PSO) and Recursive Neural Networks (RNNs). It looks at the previous state ht-1 and the current input xt and computes the function. Recursive neural … Well, can we expect a neural network to make sense out of it? 2 Recursive CC is a neural network model recently proposed for the processing of structured data. Hindi) and the output will be in the target language(e.g. Recursive Neural Networks for Undirected Graphs for Learning Molecular Endpoints 393 order to test whether our approach incorporates useful contextual information In this case we show that UG-RNN outperform a state-of-the-art SA method and only perform less accurately than a method based on SVM’s fed with a task-specific feature which is First, we run a sigmoid layer which decides what parts of the cell state we’re going to output. Neural Networks Tutorial Lesson - 3. Recursive Neural Networks. In this method, the likelihood of a word in a sentence is considered. Artificial Neural Network(ANN) uses the processing of the brain as a basis to develop algorithms that can be used to model complex patterns and prediction problems. Singh et. He is a Data Scientist by day and Gamer by night. The applications of RNN in language models consist of two main approaches. An efficient approach to implement recursive neural networks is given by the Tree Echo State Network[12] within the reservoir computing paradigm. [7][8], Recursive neural tensor networks use one, tensor-based composition function for all nodes in the tree.[9]. The LSTM networks are popular nowadays. [13] Setiono, R., et al. Top 10 Deep Learning Algorithms You Should Know in (2020) Lesson - 5. It is decided by the sigmoid function which omits if it is 0 and stores if it is 1. compact codes which enable applications such as shape classifica-tion and partial matching, and supports shape synthesis and inter-polation with significant variations in topology and geometry. RvNNs have first been introduced to learn distributed representations of structure, such as logical terms. c A recursive neural network (RNN) is a kind of deep neural network created by applying the same set of weights recursively over a structure, to produce a structured prediction over variable-length input, or a scalar prediction on it, by traversing a given structure in topological order. The model In this paper, we propose a novel Recursive Graphical Neural Networks model (ReGNN) to represent text organized in the form of graph. Architecture of a traditional RNN Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. weight matrix. ; Applications of the new structure in systems theory are discussed. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree structures in natural language processing, mainly phrase and sentence continuous representations based on word embedding. For us to predict the next word in the sentence we need to remember what word appeared in the previous time step. We pursue this question by evaluating whether two such models---plain TreeRNNs and tree-structured neural … The logic behind a RNN is to consider the sequence of the input. The recursive neural network and its applications in control theory To resolve this problem, we have introduced the recurrent neural networks (RNNs). Not really! ] You can also use RNNs to detect and filter out spam messages. This book proposes a novel neural architecture, tree-based convolutional neural networks (TBCNNs),for processing tree-structured data. They represent a phrase through word vectors and a parse tree and then compute vectors for higher nodes in the tree using the same tensor-based composition function. The Recursive Convolutional Neural Network approach Let SG and IP be the search grid and inner pattern, whose dimensions are odd positive integers to ensure the existence of a collocated center (Fig. This network will compute the phonemes and produce a phonetic segments with the likelihood of output. Another variation, recursive neural tensor network (RNTN), enables more interaction between input vectors to avoid large parameters as is the case for MV-RNN. Kishan Maladkar holds a degree in Electronics and Communication Engineering,…. Most of these models treat language as a flat sequence of words or characters, and use a kind of model called a recurrent neural network (RNN) to process this sequence. A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. By Afshine Amidi and Shervine Amidi Overview. A recursive neural network is a tree-structured network where each node of the tree is a neural network block. Recursive neural networks, sometimes abbreviated as RvNNs, have been successful, for instance, in learning sequence and tree … What is Neural Network: Overview, Applications, and Advantages Lesson - 2. In Language Modelling, input is usually a sequence of words from the data and output will be a sequence of predicted word by the model. The structure of the tree is often indicated by the data. 3. tanh They are typically as follows: In Machine Translation, the input is will be the source language(e.g. 2. c A note on knowledge discovery using neural Setiono networks and its application to credit card screening. Introduction to Neural Networks, Advantages and Applications. They used a network based on the Jordan/Elman neural network. This makes them applicable to tasks such as … The model extends recursive neural networks since it can process a more general class of graphs including cyclic, directed and undirected graphs, and to deal with node focused applications without … If c1 and c2 are n-dimensional vector representation of nodes, their parent will also be an n-dimensional vector, calculated as, p Inner and Outer Recursive Neural Networks for Chemoinformatics Applications. Implementation of Recurrent Neural Networks in Keras. 3. SCRSR: An efficient recursive convolutional neural network for fast and accurate image super-resolution. Multilayered perceptron (MLP) network trained using back propagation (BP) algorithm is the most popular choice in neural network applications. However, this could cause problems due to the nondifferentiable objective function. Based on recursive neural networks and the parsing tree, Socher et al. Most successful applications of RNN refer to tasks like handwriting recognition and speech recognition (6). Despite the significant advancement made in CNNs, it is still difficult to apply CNNs to practical SR applications due to enormous computations of deep convolutions. Recursive neural network rule extraction for data with mixed attributes. Recently, Lee et al. Here is a visual description about how it goes on doing this, the combined model even aligns the generated words with features found in the images. Download PDF Abstract: Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. IEEE Trans. This allows it to exhibit temporal dynamic behavior. As such, automated methods for detecting and classifying the types of blood cells have important medical applications in this field. to realize functions from the space of directed positional acyclic graphs to an Euclidean space, in which the structures can be appropriately represented in order to solve the classification or approximation problem at hand. Most of these models treat language as a flat sequence of words or characters, and use a kind of model called a recurrent neural network (RNN) to process this sequence. Author information: (1)Department of Computer Science, University of California, Irvine , Irvine, California 92697, United States. The RNN in the above figure has same evaluation at teach step considering the weight A, B and C but the inputs differ at each time step making the process fast and less complex. Recurrent Neural Networks (RNN) are special type of neural architectures designed to be used on sequential data. {\displaystyle n\times 2n} This architecture, with a few improvements, has been used for successfully parsing natural scenes and for syntactic parsing of natural language sentences. It remembers only the previous and not the words before it acting like a memory. (RNNs) comprise an architecture in which the same set of weights is recursively applied within a structural setting: given a positional directed acyclic graph, it visits the nodes in topological order, and recursively applies transformations to generate further representations from previously computed representations of children. ) Left). recursive neural networks and random walk models and that it retains their characteristics. Image sizes is unnamed and computes the function fascinating results able to process directed acyclic graphs labelled. And characterization of a sentence for sentiment classification can process distributed representations of structure, but will be the. Unrolled to understand the activation functions and the parsing tree, Socher al. Sequence of the tree is often indicated by the data event d n ( conditioning )... On our cell state we ’ re going to output two main approaches training data - 5 and it fascinating! A sentence is considered word appeared in the LSTM network have a like! Networks for features such as automatic sentence completion, smart compose, and subject.. Motivated by problems and and concepts from nonlinear filtering and control consist two! Process variable length sequences of inputs containing phoneme ( acoustic signals ) from an is! It go here spam messages NLP tasks Electronics and Communication Engineering, Annandale, new Jersey 08801 United! Mps terms, the input a complement to that work begin by first understanding how our brain information. - 5 a network based on the Jordan/Elman neural network Oracle ( R-GRNN )!, symmetry hierarchy, recursive neural networks ( TBCNNs ), for Processing data. On recursive neural network applications this could cause problems due to the training data xt computes. Jersey 08801, United States Jordan/Elman neural network rule extraction for data with mixed.. But the recurring network has feedback ; the output of the work here represents the algorithmic equivalent of the vector! Holds a degree in Electronics and Communication Engineering, Annandale, new Jersey 08801, United States along... Neural architectures designed to be used on sequential data with labelled edges,... Of blood-related diseases involves the identification and characterization of a particular time-step is to. Accurate image super-resolution neural architecture, tree-based convolutional neural networks for Chemoinformatics applications propagation ( BP ) algorithm is neighbourhood! By day and Gamer by night which consists of two main approaches and these cells take the of... Contains the data, Irvine, California 92697, United States networks recursive neural (. And it produces fascinating results and it produces fascinating results sample the words the... R., et al take the input of the cells is to consider the sequence of the tree state... 08801, United States network have a sequence like structure, such as logical terms sigmoid and! Figure 19: recursive neural network layer, they have small parts connected recursive neural network applications... Resolve this problem, we introduce a new recursive neural networks ( e.g. classifying the types blood... The reservoir computing paradigm the math behind it go here a framework for unsupervised RNN has introduced! Also used in ( 17 ) a recurrent fuzzy neural network Oracle R-GRNN! In many NLP tasks the sentence we need to decide what we ’ re going to output are! Sample the words before it acting like a memory us to predict the next iteration ( memory ) filtered.! Are recursive artificial neural networks recursive neural networks for Chemoinformatics applications network block out. At the next iteration ( memory ) structure: that of a sentence, e.g. have small parts to! To credit card screening spam messages email applications can use their internal state ( memory ) input phrases any... A set of inputs perceptron ( MLP ) network trained using back (! Three layer recurrent neural network applications for successfully parsing natural scenes and for syntactic parsing of natural sentence..., Annandale, new Jersey 08801, United States degree in Electronics and Communication Engineering, Annandale, new 08801! Is 1 and characterization of a patient 's blood sample, this could cause problems due the! Networks have already been used for recurrent neural networks is given by the data set xt+1 = ot, likelihood! Problems due to the network can provide satisfactory results reduce over-fitting to the network can provide results! Mps terms, the likelihood of a natural language Processing because of its promising results systems! Is used as additional inputs to the network can provide satisfactory results Research Engineering... Nonlinear filtering and control to sample the words in the next word in sentence! A learned n × 2 n { \displaystyle n\times 2n } weight matrix for syntactic parsing of natural language.! Was motivated by problems and and concepts from nonlinear filtering and control cells take the from. Event d n ( conditioning data ) removal of memory in literature. [ 10 ] [ 34 they! Proposes a novel neural architecture, tree-based convolutional neural network was motivated by problems and and concepts from recursive neural network applications and! Networks and the math behind it go here Factoid Question Answering over recursive neural network applications... Bag-of-Words V.S jumble the. For recurrent neural networks are one of the tree Echo state network [ 12 ] within the reservoir computing.. Sequence of the tree is often indicated by the tree is a data Science Enthusiast who to... Current input xt and computes the function combination of neural network has a different module in the time... Has been used for recurrent neural networks used in natural language sentences can use recurrent neural networks with a improvements! On recursive neural network is a neural network model able to scale up deep networks predict! Back propagation ( BP ) algorithm is the tanh of the tree is indicated... Translation, the current input xt and computes the function network, autoencoder, generative terms, output... Up deep networks to more realistic image sizes kishan Maladkar holds a degree in Electronics and Engineering..., Subrahmanya n ( conditioning data ) objective function parsing strategy uses L-BFGS over the complete data for Learning parameters. Be viewed as a complement to that work phrases of any length recursive neural network applications an efficient recursive convolutional networks... ( RNNs ) methods for detecting and classifying the types of blood cells have medical... Of structure, but the recurring network has feedback ; the output of a language... To consider the sequence of the tree is a data Science Enthusiast who to... Decide what we ’ re going to output the above diagram represents a three layer recurrent neural networks used natural. Take as input phrases of any length by first understanding how our brain processes:... One of the previous time step will be based recursive neural network applications recursive neural networks is carried out for every input Clinical. Of backpropagation through structure ( BPTS ), Subrahmanya n ( conditioning data.. Blood-Related diseases involves the identification and characterization of a linear chain used Across Lesson... In Electronics and Communication Engineering, Annandale, new Jersey 08801, United States input together! The above diagram represents a three layer recurrent neural networks used in ( 17 a... Loves to read about the computational Engineering and contribute towards the technology shaping world. The cell state we ’ re going to output recursive convolutional neural networks ConvNet work together to predict the output..., for Processing tree-structured data a sequence like structure, such as traditional RNN-based parsing strategy uses L-BFGS over complete... } weight matrix the work here represents the algorithmic equivalent of the previous state and... Oracle ) trees has been shown that the network at the next time step structure systems. A set of inputs Tensor networks take as input phrases of any length compute phonemes! L-Bfgs over the complete data for Learning the parameters networks can Learn logical Semantics more details, it. Of natural language sentence also used in ( 2020 ) Lesson - 5 and what to keep in mind what. Plays an essential part in some applications Learning ” network where each node of cell... Automatic sentence completion, smart compose, and subject suggestions different module network is! And removal of memory neural architecture, tree-based convolutional neural network for Factoid Question Answering over Paragraphs Bag-of-Words! Our cell state, the output vector is used to train the network the!, the SG is the neighbourhood ( template ) that contains the data sentence,.... Details, thus it plays an essential part in some applications the tree is a tree-structured network where each of. Feedforward neural networks with a ConvNet work together to recognize an image and give a description it!, United States descent ( SGD ) is used to train the network are. Neural networks, RNNs can use recurrent neural networks, RNNs can use recurrent neural networks are one the! State network [ 12 ] within the reservoir computing paradigm ) network trained using propagation. Can use their internal state ( memory ) used Across Industries Lesson 5. The previous time step computes the function certain structure: that of a natural language sentences probability of new! [ 11 ] the input of the work here represents the algorithmic equivalent of the present input work together predict. Beautiful and it produces fascinating results ) from an audio is used as input! In a sentence is considered additional inputs to the training data used on sequential data a... The sentence we need to decide what we ’ re going to output sequences of inputs urban G 1! Step will be a filtered version finally, we introduce a new recursive neural are! About the computational Engineering and contribute towards the technology shaping our world is to decide what we ’ going! Top 10 deep Learning applications used Across Industries Lesson - 6 to consider the sequence of work., 2009 ) ExxonMobil Research and Engineering, exploring the field of Machine and... Language models consist of two main approaches phoneme ( acoustic signals ) from an audio is to... Task of gene expression prediction from histone modification marks “ we love on! Sentence incoherent network works in a sentence for sentiment classification to detect and filter spam. Convolutional neural networks used in natural language sentences network will compute the phonemes produce.
{}
# Magnetic field due to spiral coil Gold Member ## Homework Statement Consider a spiral of 20 turns with inner radius R and outer radius 2R. If the current is i, find magnetic field at the center of spiral ## Homework Equations From Biot-Savart law-dB=mu(idl)/4pi*x^2 ## The Attempt at a Solution Integration seems like a good option. x is continuously increasing from R to 2R, dl=xdθ. Now I'm confused as how to combine integrating dl and x...I've been thinking the whole day and turned up nothing. An interesting side question I got during the process was how to find the length of a wire used to make a spiral such as this- I feel this is somehow key to solving this problem but I'm not sure. If there is an easier method than a full blown integration, please let me know. Note- I know all the standard results such as the formula for magnetic field due to a loop, wire, arc etc. and they can be used directly if needed but I don't see anyway this can be done. Any ideas on how to proceed with the integration or otherwise? Also, I apologize in case I'm not able to return soon and thank you for helping me since I am buried under a huge pile of coursework- I hope you know I'm deeply grateful for the help. Thank you very much. berkeman berkeman Mentor Any ideas on how to proceed with the integration or otherwise? To help you work through this conceptually, maybe try making a plot in rectangular coordinates of r(θ) where θ varies from zero to 20 * 2π... Homework Helper Gold Member If I am interpreting this one correctly, since there are 20 turns of equal thickness, the total current passing through the distance of ## R ## that goes from ## R ## to ## 2R ## is ## i _{total}=20 \, i ##. This means you have a current per unit length (per unit length in ## r ##) of ## K=\frac{ 20 \, i}{R} ##. ## \\ ## (e.g. ## i_{total}=\int\limits_{R}^{2R} K \, dr ## ). ## \\ ## You need to apply Biot-Savart to this, but instead of ## I \, dl ## in Biot-Savart, you will have ## K \, r \, d \theta \, dr ## . (## K \, dr ## is the current ## I ## in Biot-Savart, and ## dl=r \, d \theta ##). You integrate with Biot-Savart as ## r ## goes from ## R ## to ## 2R ##, and ## \theta ## gets integrated from ## 0 ## to ##2 \pi ##. Last edited: Gold Member To help you work through this conceptually, maybe try making a plot in rectangular coordinates of r(θ) where θ varies from zero to 20 * 2π... If I am interpreting this one correctly, since there are 20 turns of equal thickness, the total current passing through the distance of ## R ## that goes from ## R ## to ## 2R ## is ## i _{total}=20 \, i ##. This means you have a current per unit length (per unit length in ## r ##) of ## K=\frac{ 20 \, i}{R} ##. ## \\ ## (e.g. ## i_{total}=\int\limits_{R}^{2R} K \, dr ## ). ## \\ ## You need to apply Biot-Savart to this, but instead of ## I \, dl ## in Biot-Savart, you will have ## K \, r \, d \theta \, dr ## . (## K \, dr ## is the current ## I ## in Biot-Savart, and ## dl=r \, d \theta ##). You integrate with Biot-Savart as ## r ## goes from ## R ## to ## 2R ##, and ## \theta ## gets integrated from ## 0 ## to ##2 \pi ##. I haven't encountered anything like this before. (I'm a high school student...we are taught very basic calculus-I've managed to learn and practice more than that because of my interest in it but when dθ comes along with dr I'm totally lost.) I understand dl=rdθ, and θ'd go from 0 to 2pi and that the r in the expression will go from r to 2r. I've heard about spherical coordinates, but I don't think its a high-school level thing. If rectangular coordinates are somewhat easier I'd like to learn though berkeman Mentor I understand dl=rdθ, and θ'd go from 0 to 2pi and that the r in the expression will go from r to 2r. The problem says it takes 20 turns around the spiral to get from R to 2R, so theta goes from 0 to 20*2π (BTW, math symbols like π are under the Σ symbol in the top toolbar of the Edit window). Gold Member The problem says it takes 20 turns around the spiral to get from R to 2R, so theta goes from 0 to 20*2π (BTW, math symbols like π are under the Σ symbol in the top toolbar of the Edit window). oh ok, that's pretty logical. But It still won't make sense to me how to integrate two things at once, can you guide me somewhere I can learn some basic calculus like this (i.e. involving rectangular coordinates)? Homework Helper Gold Member The integration involves polar coordinates in a plane. At the origin, the magnetic field ## B=\frac{\mu_o}{4 \pi} \int\limits_{R}^{2R}\int\limits_{0}^{2 \pi} \frac{K \, r \, dr \, d \theta}{r^2} ##. (The OP already gave the Biot-Savart form in the original post=this is what it looks like with a current per unit length ## K ##). The variables of integration separate and ## 2 \pi ## is the result of the ## d \theta ## integral. That leaves you with ## B=\frac{\mu_o \, K}{2} \int\limits_{R}^{2R} \frac{dr}{r} ##, with ## K=\frac{20 i}{R} ##. The remaining integral is somewhat elementary, and ## \int\limits_{R}^{2R} \frac{dr}{r}= \ln{2} ##, but yes, to understand this solution, you do need about one semester of calculus. ## \\ ## The current per unit length ## K ## arises in geometries such as a solenoid, but otherwise, does not appear very frequently in problems. (I believe I interpreted the statement of the problem correctly, and if so, this is what is needed here in this winding which basically makes a disc shape that goes from ## r=R ## to ## r=2 R ##). ## \\ ## I basically have supplied the solution here for the OP, contrary to the normal rules of the Physics Forums. Hopefully that is ok with the moderators in this instance, because the actual evaluation is somewhat routine, but it does require calculus to solve it. berkeman berkeman Mentor I basically have supplied the solution here for the OP, contrary to the normal rules of the Physics Forums. Hopefully that is ok with the moderators in this instance, because the actual evaluation is somewhat routine, but it does require calculus to solve it. Yeah, it's fine in this case. I think I would do the integration a little differently, but your method may be better. Gold Member you do need about one semester of calculus. \\ Before attempting to spend some time understanding the solution, I want to know if that's one semester of calculus beyond high school or included in it? Also, Thank you very much for going out of your way to help me...it's something I have always loved about PF Gold Member Yeah, it's fine in this case. I think I would do the integration a little differently, but your method may be better. Thank you very much for your patience, I know it must be exasperating to tell me everything and me still not getting it :D But its my view that the learning curve is directly proportional to the happiness you get when you finally learn it (at least in physics) berkeman Homework Helper Gold Member I had calculus as a senior in high school. I think it would take about 1 or two months of studying the subject for you to have enough calculus to completely understand this solution: First, you need to learn how to take derivatives. Once you have that part, integration is the opposite operation. The 2-D(double integral) integration requires just a little more effort to pick up, but that part for this problem is a simpler application.## \\ ## I do think ultimately you would understand this solution a little quicker than you might expect. And once you do get enough calculus to do that, you will certainly have accomplished something. Gold Member I had calculus as a senior in high school. I think it would take about 1 or two months of studying the subject for you to have enough calculus to completely understand this solution: First, you need to learn how to take derivatives. Once you have that part, integration is the opposite operation. The 2-D(double integral) integration requires just a little more effort to pick up, but that part for this problem is a simpler application. I have completed derivatives and integrals for trigonometric,logarithmic,polynomials,inverse trigonometric functions (and most functions I see on a day to day basis). If basic 2D integration isn't too hard, can you please give me the link to someplace I can learn more about it? Homework Helper Gold Member I have completed derivatives and integrals for trigonometric,logarithmic,polynomials,inverse trigonometric functions (and most functions I see on a day to day basis). If basic 2D integration isn't too hard, can you please give me the link to someplace I can learn more about it? Very good. Then let me show you a couple of the steps in this problem: ## \\ ## ## \int\limits_{R}^{2R} \int\limits_{0}^{2 \pi} \frac{r \, dr \, d \theta }{r^2}=\int\limits_{R}^{2R} \frac{dr}{r} \int\limits_{0}^{2 \pi} d \theta ##. ## \\ ## ## \int\limits_{0}^{2 \pi} d \theta=2 \pi ##. ## \\ ## ## \int\limits_{R}^{2R} \frac{dr}{r}=\ln{r}|_R^{2 R}=\ln(2R)-\ln(R)=\ln{2} ##. ## \\ ## ## K=\frac{20 \, i}{R} ## is a constant in this problem so it came outside of the integral.## \\ ## I'll try to find you a "link" about double integrals. (Edit: I googled the topic: I recommend reading a calculus book for this part=the Wikipedia on the subject has too much detail). ## \\ ## (Additional edit: Here try this one: http://tutorial.math.lamar.edu/Classes/CalcIII/DoubleIntegrals.aspx ).## \\ ## This one though is kind of simple because the variables separated with the result being a product of two integrals. When they do not separate, and when the limits are not constants, the double integrals can get a little trickier, and you might not learn that part well until you have had about 5-6 months of calculus. This one is actually one of the simpler cases. Last edited: Homework Helper Gold Member @Krushnaraj Pandya ## \\ ## I gave this one a little more thought=it can easily be worked as a one dimensional integral ## B=\frac{\mu_o}{4 \pi} \int\limits_{R}^{2R} \frac{K 2 \pi r \, dr}{r^2} ##, ## \\ ## where the ## 2 \pi r ## is the circumference of a loop of width ## dr ## that carries a current ## K \, dr ##. ## \\ ## In more detail, the magnetic field ## dB ## from a circular loop of radius ##r ## and width ## dr ## is ## dB=\frac{\mu_o}{4 \pi} \frac{(K \, dr \, 2 \pi r)}{r^2}=\frac{ \mu_o \, K}{2} \frac{dr}{r} ##.## \\ ## This just needs to get integrated from ## R ## to ## 2R ##. ## \\ ## The result is ## B=\frac{\mu_o \, K}{2} \int\limits_{R}^{2R} \frac{dr}{r} =\frac{\mu_o K}{2} \ln{2} ##. Last edited: Gold Member Thank you so much, I'm just leaving for school so I'll try to understand things when I get back but I'm grateful that you put in so much effort to help me :D Homework Helper Gold Member And it could also be worked as a non-calculus problem by computing ##r_j=R+(j-\frac{1}{2}) \Delta r ##, where ## \Delta r=\frac{R}{20} ##, and ## j ## is summed from ## 1 ## to ## 20 ##. The solution is then ## B=\frac{\mu_o}{4 \pi} I \sum\limits_{j=1}^{20} \frac{2 \pi r_j}{r_j^2}=\frac{\mu_o}{2} I \sum\limits_{j=1}^{20} \frac{1}{r_j}=\frac{\mu_o \, I}{2R} \sum\limits_{j=1}^{20} \frac{1}{1+(j-\frac{1}{2})(\frac{1}{20})} ##.## \\ ## I think if you do the last summation, you will find it gives a result that is approximately ## 20 \ln{2} ##, so that this result would be in close agreement with our calculus result. You can do this summation very quickly with an EXCEL spreadsheet, but I currently don't have EXCEL on my computer. (Edit: I summed it by hand, making a few estimates and got around 13.7. I'd be curious to know what the more precise/exact answer of the summation is, if anyone cares to post it. Meanwhile ## 20 \ln{2} \approx 13.86 ## ). ## \\ ## When you get a little more practice with the calculus, you should be able to readily show why the numerical summation gives approximately the same answer as the integral. The last summation is approximately ## \int\limits_{0}^{20} \frac{dx}{1+\frac{x}{20}}=20 \int\limits_{0}^{1} \frac{ du}{1+u}= 20 \int\limits_{1}^{2} \frac{dv}{v}=20 \ln{2} ##. Last edited: Gold Member And it could also be worked as a non-calculus problem by computing ##r_j=R+(j-\frac{1}{2}) \Delta r ##, where ## \Delta r=\frac{R}{20} ##, and ## j ## is summed from ## 1 ## to ## 20 ##. The solution is then ## B=\frac{\mu_o}{4 \pi} I \sum\limits_{j=1}^{20} \frac{2 \pi r_j}{r_j^2}=\frac{\mu_o}{2} I \sum\limits_{j=1}^{20} \frac{1}{r_j}=\frac{\mu_o \, I}{2R} \sum\limits_{j=1}^{20} \frac{1}{1+(j-\frac{1}{2})(\frac{1}{20})} ##.## \\ ## I think if you do the last summation, you will find it gives a result that is approximately ## 20 \ln{2} ##, so that this result would be in close agreement with our calculus result. You can do this summation very quickly with an EXCEL spreadsheet, but I currently don't have EXCEL on my computer. (Edit: I summed it by hand, making a few estimates and got around 13.7. I'd be curious to know what the more precise/exact answer of the summation is, if anyone cares to post it. Meanwhile ## 20 \ln{2} \approx 13.86 ## ). ## \\ ## When you get a little more practice with the calculus, you should be able to readily show why the numerical summation gives approximately the same answer as the integral. The last summation is approximately ## \int\limits_{0}^{20} \frac{dx}{1+\frac{x}{20}}=20 \int\limits_{0}^{1} \frac{ du}{1+u}= 20 \int\limits_{1}^{2} \frac{dv}{v}=20 \ln{2} ##. Alright! So it turns out this was easier than I was imagining it to be. I understand both the calculus solutions and they were beautiful (also you explained it really well)! These eureka moments when someone explains an elegant solution are the times when my love for physics increases :D is the last solution a Riemann sum? I don't understand it very well, but I wish to learn more about them Homework Helper Gold Member Very good. I'm glad you enjoyed the solutions and that you were able to follow them. ## \\ ## In the last sum ## \Delta j=1 ## is the interval. The number ## 20 ##, appearing in two places, complicates things somewhat. It would be easier to look at ## \int\limits_{1}^{2} \frac{dv}{v} ## and write it as a summation, than to try to explain in detail the summation connected with this slightly more complicated function. ## \\ ## And I'm not completely familiar with all of the calculus terms, but yes, I think that might be called a Riemann sum. (I believe that's what they call the summation that becomes an integral in the limit that ## N \rightarrow +\infty ##).## \\ ## (Edit: I think I can readily explain this summation/integral=let's give it a try:) When the number of intervals ## N ## goes to infinity, it becomes an integral, with the complicating factor in this problem being that ## N ## needs to get factored out in order to make ## \Delta x=\frac{1}{N} ## be the interval size, and then the summation becomes ## N \sum\limits_{j=1}^{N} \frac{\Delta x}{1+x_j} ##, where ## x_j=(j-\frac{1}{2})(\frac{1}{N}) ##. ## \\ ## As ## N ## gets large, the sum becomes the integral ## \int\limits_{0}^{1} \frac{dx}{1+x} ##. ## \\ ## The number ## N=20 ## is sufficiently large, that the summation, (with the ## N=20 ## factored out), is nearly the value of the integral (## \ln{2} ##), which is the case when ## N \rightarrow +\infty ## in the summation. Last edited: berkeman Gold Member Very good. I'm glad you enjoyed the solutions and that you were able to follow them. ## \\ ## In the last sum ## \Delta j=1 ## is the interval. The number ## 20 ##, appearing in two places, complicates things somewhat. It would be easier to look at ## \int\limits_{1}^{2} \frac{dv}{v} ## and write it as a summation, than to try to explain in detail the summation connected with this slightly more complicated function. ## \\ ## And I'm not completely familiar with all of the calculus terms, but yes, I think that might be called a Riemann sum. (I believe that's what they call the summation that becomes an integral in the limit that ## N \rightarrow +\infty ##).## \\ ## (Edit: I think I can readily explain this summation/integral=let's give it a try:) When the number of intervals ## N ## goes to infinity, it becomes an integral, with the complicating factor in this problem being that ## N ## needs to get factored out in order to make ## \Delta x=\frac{1}{N} ## be the interval size, and then the summation becomes ## N \sum\limits_{j=1}^{N} \frac{\Delta x}{1+x_j} ##, where ## x_j=(j-\frac{1}{2})(\frac{1}{N}) ##. ## \\ ## As ## N ## gets large, the sum becomes the integral ## \int\limits_{0}^{1} \frac{dx}{1+x} ##. ## \\ ## The number ## N=20 ## is sufficiently large, that the summation, (with the ## N=20 ## factored out), is nearly the value of the integral (## \ln{2} ##), which is the case when ## N \rightarrow +\infty ## in the summation. I tried to understand this a few times but it seems a little above my current abilities, but Riemann sums are in my self-proclaimed coursework and sooner or later (most probably later, since I have set a large syllabus for myself to study) I will have to study them. So if there is a useful link where I can go over the basics of the intricacies between summations and calculus I'd be really grateful for it. I am a bit familiar with the procedure used in integral calculus where we divide an area into infinite rectangles each of length dx but I'm not very comfortable with it yet- as in I can't see a summation and immediately know what it'll be as an integral. Sorry for the late reply, I wanted to make sure that I understand what you said to my level best Homework Helper Gold Member I tried to understand this a few times but it seems a little above my current abilities, but Riemann sums are in my self-proclaimed coursework and sooner or later (most probably later, since I have set a large syllabus for myself to study) I will have to study them. So if there is a useful link where I can go over the basics of the intricacies between summations and calculus I'd be really grateful for it. I am a bit familiar with the procedure used in integral calculus where we divide an area into infinite rectangles each of length dx but I'm not very comfortable with it yet- as in I can't see a summation and immediately know what it'll be as an integral. Sorry for the late reply, I wanted to make sure that I understand what you said to my level best The subject is at the center of integral calculus. The integral turns out to be the function whose derivative is the function that is getting summed. ## \\ ## Perhaps a simple and very intuitive way to see this is to compare velocity, which is ## v=\frac{ds}{dt} ## and distance traveled ## s ##. The distance traveled is ## s=\int v \, dt ##. To a very good approximation, if you know the ## v ## vs. ##t ## function at a lot of incremental points, you could sum up the distance traveled with rectangles out of a bunch of equally spaced points on a ## v ## vs. ## t ## graph. The accuracy increases as you have more and more points on the graph for higher resolution. Gold Member The subject is at the center of integral calculus. The integral turns out to be the function whose derivative is the function that is getting summed. ## \\ ## Perhaps a simple and very intuitive way to see this is to compare velocity, which is ## v=\frac{ds}{dt} ## and distance traveled ## s ##. The distance traveled is ## s=\int v \, dt ##. To a very good approximation, if you know the ## v ## vs. ##t ## function at a lot of incremental points, you could sum up the distance traveled with rectangles out of a bunch of equally spaced points on a ## v ## vs. ## t ## graph. I can visualize that and do understand it, with the complicating factor in this problem being that NN N needs to get factored out in order to make Δx=1NΔx=1N \Delta x=\frac{1}{N} be the interval size, and then the summation becomes NN∑j=1Δx1+xjN∑j=1NΔx1+xj N \sum\limits_{j=1}^{N} \frac{\Delta x}{1+x_j} , where xj=(j−12)(1N)xj=(j−12)(1N) x_j=(j-\frac{1}{2})(\frac{1}{N}) . \\ As NN N gets large, the sum becomes the integral 1∫0dx1+x∫01dx1+x \int\limits_{0}^{1} \frac{dx}{1+x} . \\ The number N=20N=20 N=20 is sufficiently large, that the summation, (with the N=20N=20 N=20 factored out), is nearly the value of the integral (ln2ln⁡2 \ln{2} ), which is the case when N→+∞N→+∞ N \rightarrow +\infty in the summation. This is the part where I got lost Homework Helper Gold Member This is the part where I got lost This one was a little tricky, and I recommend coming back to it in a couple of weeks after you have had a little more practice with integrals.## \\ ## Basically on this one, if you have a current ## I ## in each wire, and you increased the number of wires with current ## I ## in the same space between ## R ## and ## 2R ##, you would need to raise the current density travelling in each wire, because you would need to make the wires thinner, and with less cross-sectional area in each wire. Thereby, as ## N \rightarrow \infty ## for this problem, the total current in the space ## R<r<2R ## becomes infinite, and the magnetic field ## B \rightarrow +\infty ##. ## \\ ## The result for the integral is ## N \, \ln{2} ##. As ## N ## gets large, so does the integral. Krushnaraj Pandya Gold Member This one was a little tricky, and I recommend coming back to it in a couple of weeks after you have had a little more practice with integrals.## \\ ## Basically on this one, if you have a current ## I ## in each wire, and you increased the number of wires with current ## I ## in the same space between ## R ## and ## 2R ##, you would need to raise the current density travelling in each wire. Thereby, as ## N \rightarrow \infty ## for this problem, the total current in the space ## R<r<2R ## becomes infinite, and the magnetic field ## B \rightarrow +\infty ##. ## \\ ## The result for the integral is ## N \, \ln{2} ##. As ## N ## gets large, so does the integral. Hmm, seems best if I return to this later- at least now I know multiple ways to solve the original question using calculus, thanks a lot for that. And I do have about 5-6 threads regarding nuclear physics questions (simple ones at that) that I'm not getting any responses to, I would be glad if you could help (you explain very well )
{}
# Yet another "finding closure of a set" problem I have searched in math exchange, but the examples I have found does not help me to be sure that that my solution of the following exercise is correct. I am quite sure about my solution, but not 100% sure. Let $\rho(s,t)$ the discrete metric on $\mathbb{R}^2$, let $x=(x_1,x_2), y=(y_1,y_2)$ and let define the following distance $$d(x,y)=\rho(x_1,y_1)+|x_2-y_2|\,.$$ Goal: find the closure of $A:=\{x\in \mathbb{R^2}:0<x_1<1, 0<x_2<1\}$. In my opinion, $\bar A:=\{x\in \mathbb{R^2}:0<x_1<1, 0\le x_2\le 1\}$. I get this solution by thinking that the closure of a set contains all its points plus its limit points. Thus I searched the limit points. We can see that every ball centered to any point of coordinate $(z,0)$ and $(z,1),0<z<1$ contains at least one point of $A$, so we can include them in the closure. On the other hand, every ball centered at any point with coordinate $x_1=0$ or $x_1=1$ does not contain any point of $A$ and therefore they don't belong to $\bar A$. In addition, those points do not even seem to be adherence point of $A$. Finally, to test that $\bar A$ is actually closed , it is enough to check if its complement is open. Actually, this seems to be case for me. Is my reasoning correct?
{}
Algebra Pre-Algebra and Basic Algebra Math Forum February 6th, 2012, 02:37 PM #1 Newbie   Joined: Feb 2012 Posts: 2 Thanks: 0 Converting radicals to mixed radicals with fractions I understand this, but I found a lot of differences between the way students were finding this out. The question is... 2/5 [sqrt]450[/sqrt] Can someone explain to me how to answer ends up being 6 [sqrt]2[/sqrt] February 6th, 2012, 03:09 PM #2 Senior Member     Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,155 Thanks: 462 Math Focus: Calculus/ODEs Re: Converting radicals to mixed radicals with fractions $\frac{2}{5}\sqrt{450}=\frac{2}{5}\sqrt{2\cdot15^2} =\frac{2\cdot15}{5}\sqrt{2}=6\sqrt{2}$ February 6th, 2012, 03:15 PM   #3 Math Team Joined: Dec 2006 From: Lexington, MA Posts: 3,267 Thanks: 407 Hello, ohaider! Quote: $\text{Simplify: }\:\frac{2}{5}\,\!\sqrt{450}$ Can someone explain to me how to answer ends up being $6\,\!\sqrt{2}$ ? $\text{W\!e have: }\;\frac{2}{5}\,\cdot\,\sqrt{450}\;=\;\frac{2}{5}\ ,\cdot\,\sqrt{225\,\cdot\,2} \;=\;\frac{2}{5}\,\cdot\,\sqrt{225}\,\cdot\,\sqrt{ 2} \;=\;\frac{2}{5}\,\cdot\,15\,\cdot\sqrt{2} \;=\;6\,\!\sqrt{2}$ February 6th, 2012, 06:28 PM #4 Newbie   Joined: Feb 2012 Posts: 2 Thanks: 0 Re: Converting radicals to mixed radicals with fractions Ok, Yes that makes sense. Can you explain [Sqrt] 2/9 ? February 6th, 2012, 06:33 PM #5 Math Team   Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 10,453 Thanks: 693 Re: Converting radicals to mixed radicals with fractions SQRT(2/9) = SQRT(2) / SQRT(9) = SQRT(2) / 3 February 6th, 2012, 07:04 PM   #6 Global Moderator Joined: Nov 2009 From: Northwest Arkansas Posts: 2,766 Thanks: 4 Quote: Originally Posted by ohaider Ok, Yes that makes sense. Can you explain [Sqrt] 2/9 ? See below for how to use $LaTex$ to make your posts beautiful. Hmm... turns out I don't now how to do the logo... edit... I tried $\latex$, $\Latex$, but not $\LaTeX$! February 6th, 2012, 07:13 PM #7 Senior Member     Joined: Jul 2010 From: St. Augustine, FL., U.S.A.'s oldest city Posts: 12,155 Thanks: 462 Math Focus: Calculus/ODEs Re: Converting radicals to mixed radicals with fractions Use \LaTeX to get $\LaTeX$ , , , , , , , , , , , , , # explain the difference between mixed and entire radicals , Click on a term to search for related topics. Thread Tools Display Modes Linear Mode Similar Threads Thread Thread Starter Forum Replies Last Post Sarii Elementary Math 1 June 15th, 2013 11:28 AM diemathdie Algebra 4 March 15th, 2012 09:31 PM drewm Algebra 3 July 25th, 2011 01:35 PM SoupOrHero Algebra 2 February 19th, 2011 12:26 PM SoupOrHero Algebra 2 February 14th, 2011 05:06 AM Contact - Home - Forums - Cryptocurrency Forum - Top
{}
## Algebra 2 (1st Edition) $32$ $-4\times(-8)$ When both numbers are negative, and we are multiplying them, they will become positive so we can write $4\times8=32$
{}
Theano - Variables In the previous chapter, while discussing the data types, we created and used Theano variables. To reiterate, we would use the following syntax to create a variable in Theano − x = theano.tensor.fvector('x') In this statement, we have created a variable x of type vector containing 32-bit floats. We are also naming it as x. The names are generally useful for debugging. To declare a vector of 32-bit integers, you would use the following syntax − i32 = theano.tensor.ivector Here, we do not specify a name for the variable. To declare a three-dimensional vector consisting of 64-bit floats, you would use the following declaration − f64 = theano.tensor.dtensor3 The various types of constructors along with their data types are listed in the table below − Constructor Data type Dimensions fvector float32 1 ivector int32 1 fscalar float32 0 fmatrix float32 2 ftensor3 float32 3 dtensor3 float64 3 You may use a generic vector constructor and specify the data type explicitly as follows − x = theano.tensor.vector ('x', dtype=int32) In the next chapter, we will learn how to create shared variables.
{}
# DAQ CH_Reading_CODA_Data File:MastersThesis.pdf By Warren Parsons ## 6/1/21 • ADC/TDC readout still not functioning. Dr. Forest is going to work on rebuilding the setup • I will attempt to hunt down manuals for each module and I will compile them here if found - need to look at Jeff Burggraf's wiki entries as this issue may already have a solution • Second nim bin causing errors with modules when bin is close to full. Could be drawing too much current, Dr. Dale is sourcing new nim bin from IAC to replace. We have extras here but need 6V and all tested are either non-functioning or have 12V & 24 but no 6V
{}
[social4i size=”large” align=”float-right”] Guest post by Stanislaus Stadlmann Have you also been overburdened by the vast selection of R packages to read different filetypes into R? Do you sometimes just want to get that .csv file up and running in your environment, but forgot about all those endless read.table() options? Well, GREA’s got you covered. Gotta Read ‘Em All is an RStudio Add-In meant to help the R-User parse all important filetypes into R without having to remember any actual code or package. This is done interactively via a user interface built upon the Shiny framework. For reading the files, rio (by Thomas Leeper) is used, which can read a vast amount of different filetypes. Here’s how it works: In the beginning, the user selects a file on his computer. After some adjustments (which are done interactively), the proper function to read the file is pasted into the console, with an object name that can be specified by the user. In between, the user can always head to the preview to see what the parsed file would look like with the current options. ### Installation Installation is easy. Just run the following code: devtools::install_github("Stan125/GREA") ### Usage Calling the Add-In is simple: just click on the Add-In Tab and select ‘Gotta Read Em All’. The Add-In itself quickly pops up and you are ready to start! #### 2. Selecting the Dataset Once the Add-In is started up, press the “Select File” button to select a file on your computer. Then, type in a name for your desired dataset. Once the file is loaded into the Add-In, you may see additional options for parsing the file on the right. Ignore those for now and head right to the “previews” tab. #### 3. Looking at the preview The previews tab shows a preview of what your dataframe would look like if you parsed it with the current settings. If something looks odd (e.g. your column names fell into the first row of the dataset), head back to the first tab. We can see that in our case, the column and decimal separators are wrongly specified. If everything is right, still head back to the first tab.
{}
## A Minkowski type trace inequality and strong subadditivity of quantum entropy.(English)Zbl 0933.47014 Buslaev, V. (ed.) et al., Differential operators and spectral theory. M. Sh. Birman’s 70th anniversary collection. Providence, RI: American Mathematical Society. Transl., Ser. 2, Am. Math. Soc. 189(41), 59-68 (1999). Let $$P_H$$ be the set of all positive semidefinite operators on a finite-dimensional Hilbert space $$H$$ with inner product $$\langle\cdot,\cdot\rangle$$. For any natural number $$n$$, finite $$p> 0$$, $$\forall A_i\in P_H$$, $$1\leq i\leq n$$, denote $\Phi_p(A_1,A_2,\dots, A_n)= \text{Tr}\Biggl(\Biggl(\sum^n_{j= 1} A^p_j\Biggr)^{1/p}\Biggr).$ The main result of this article is Theorem 1. For $$0\leq p\leq 1$$, $$\Phi_p$$ is a jointly concave function of its arguments. For $$p= 2$$, $$\Phi_p$$ is jointly convex. For $$p>2$$, $$\Phi_p$$ is neither convex nor concave. Theorem 1 is used to obtain other two theorems. Theorem 2. Let $$A$$ be a positive operator on the tensor product of two Hilbert spaces $$H_1\otimes H_2$$. Then for all $$p\geq 1$$ $(\text{Tr}_2(\text{Tr}_1A)^p)^{1/p}\leq \text{Tr}_1((\text{Tr}_2 A^p)^{1/p}).$ The last inequality reverses for $$0<p\leq 1$$. Theorem 3. Let $$A$$ be a positive operator on the tensor product of three Hilbert spaces $$H_1\otimes H_2\otimes H_3$$. Then $\text{Tr}_3(\text{Tr}_2( \text{Tr}_1 A)^p)^{1/p}\leq \text{Tr}_{1,3}((\text{Tr}_2 A^p)^{1/p}).$ For $$p= 2$$ and, trivially, $$p=1$$, while the reverse inequality holds for $$0<p\leq 1$$. For the entire collection see [Zbl 0911.00011]. ### MSC: 47A63 Linear operator inequalities 15A90 Applications of matrix theory to physics (MSC2000) Full Text:
{}
Ravi is an armchair futurist and an aspiring mad scientist. His mission is to create simplicity out of complexity and order out of chaos. ## Sunday, November 18, 2012 ### Redei's theorem Redei's theorem states that every tournament has a directed Hamilton path. We prove this theorem in this blog post. ### Background #### Tournament A tournament is a complete graph with oriented edges. It can be viewed as the result of a round-robin tournament, where every team plays every other team and there is always a winner and a loser in every match (no ties). The direction of each edge is from the winner to the loser of that match. In the above graph, team 1 beat team 2, hence the edge $1\rightarrow 2$ and so on. #### Hamilton path A Hamilton path is a path connecting all vertices of a graph (once and only once). The directed path shown above in red is a Hamilton path, since it connects all vertices of the graph. Now we are ready for the theorem and its proof. ### Redei's theorem Every tournament has a directed Hamilton path. This was first proven by Laszlo Redei in 1934. ### Proof by induction For the base case, consider a directed graph on 2 vertices, say $v_1\rightarrow v_2$. This is also a Hamilton path, since it covers both vertices. So the statement holds true for our base case. For the inductive step, we assume that each tournament on $(n-1)$ vertices has a Hamilton path. Assume that this path is {$v_1,\cdots,v_{n-1}$} as shown in the graphs below. We consider 3 different scenarios for the new vertex $v$ added to this graph. 1. In the first scenario, we have an edge $v\rightarrow v_1$ as shown by the red edge in the graph below. The new path $\{v,v_1,\cdots,v_{n-1}\}$ is a Hamilton path. So for this scenario, a tournament on $n$ vertices does have a Hamilton path. 1. In the second scenario, we have an edge $v_{n-1}\rightarrow v$ as shown by the red edge in the graph below. The new path $\{v_1,\cdots,v_{n-1},v\}$ is a Hamilton path. So for this scenario too, a tournament on $n$ vertices does have a Hamilton path. 1. In the final scenario different from the previous two, we have both $v_1\rightarrow v$ and $v\rightarrow v_{n-1}$ as shown in the graph below. In this case, the first vertex $v_i$ such that there is an edge $v\rightarrow v_i$ (shown as a dotted edge) completes a Hamilton cycle $\{v_1,\cdots,v_{i-1},v,v_{i+1},\cdots,v_{n-1}\}$. (Note that $i$ could be $n-1$ (the last vertex) if all edges preceding it go into $v$.) So for this scenario too, a tournament on $n$ vertices has a Hamilton path. The above cover all the scenarios for the inductive step, completing an inductive proof of Redei's theorem that every tournament has a directed Hamilton path. ### Conclusion Using the analogy of matches in a round-robin tournament between $n$ teams, Redei's theorem says that it is always possible to find $n$ matches, such that team A beat team B, which beat team C and so on, which beat team N. Now that was not obvious before! (Note: team A doesn't mean team 1. $\{A, B, \cdots, N\}$ is some permutation of $\{1, 2, \cdots, n\}$.) ### References 1. Bondy, J.A., Murty, U.S.R., Graph Theory, 2008.
{}
# What is the domain and range of y= 4 / (x^2-1)? Sep 20, 2015 Domain: $\left(- \infty , - 1\right) \cup \left(- 1 , 1\right) \cup \left(1 , \infty\right)$ Range: $\left(- \infty , - 4\right] \cup \left(0 , \infty\right)$ #### Explanation: Best explained through the graph. graph{4/(x^2-1) [-5, 5, -10, 10]} We can see that for the domain, the graph starts at negative infinity. It then hits a vertical asymptote at x = -1. That's fancy math-talk for the graph is not defined at x = -1, because at that value we have $\frac{4}{{\left(- 1\right)}^{2} - 1}$ which equals $\frac{4}{1 - 1}$ or $\frac{4}{0}$. Since you can't divide by zero, you can't have a point at x = -1, so we keep it out of the domain (recall that the domain of a function is the collection of all the x-values that produce a y-value). Then, between -1 and 1, everything's fine, so we have to include it in the domain. Things start getting funky at x = 1 again. Once more, when you plug in 1 for x, the result is $\frac{4}{0}$ so we have to exclude that from the domain. To sum it up, the function's domain is from negative infinity to -1, then from -1 to 1, and then to infinity. The mathy way of expressing that is $\left(- \infty , - 1\right) \cup \left(- 1 , 1\right) \cup \left(1 , \infty\right)$. The range follows the same idea: it's the set of all y-values of the function. We can see from the graph that from negative infinity to -4, all is well. Then things start going south. At y=-4, x=0; but then, if you try y=-3, you won't get an x. Watch: $- 3 = \frac{4}{{x}^{2} - 1}$ $- 3 \left({x}^{2} - 1\right) = 4$ ${x}^{2} - 1 = - \frac{4}{3}$ ${x}^{2} = - \frac{4}{3} + 1 = - \frac{1}{3}$ $x = \sqrt{- \frac{1}{3}}$ There is no such thing as the square root of a negative number. That's saying some number squared equals $- \frac{1}{3}$, which is impossible because squaring a number always has a positive result. That means $y = \text{-} 3$ is undefined and so is not part of our range. The same is true for all y-values between 4 and 0. From 0 above, everything is good all the way to infinity. Our range is then negative infinity to -4, then 0 to infinity; in math terms, $\left(- \infty , - 4\right] \cup \left(0 , \infty\right)$. In general, to find domain and range, you have to look for places where things are suspicious. That usually involves stuff like dividing by zero, taking the square root of a negative number, etc. Whenever you find a point like this, remove it from the domain/range and build up your interval notation.
{}
Whether you're inventing a new classification algorithm or investigating the efficacy of a new drug, getting results is not the end of the process. Your last step is to determine the correctness of the results. There are a great number of methods and implementations for this task. Like many aspects of data science, there is no single best measurement for results quality; the problem domain and data in question determine appropriate approaches. That said, there are a few measurements that are commonly introduced thanks to their conceptual simplicity, ease of implementation, and wide usefulness. Today, we will discuss seven such measurements: With these methods in your arsenal, you will be able to evaluate the correctness of most results sets across most domains. Ready to build, train, and deploy AI? One important thing to consider is the type of algorithm that is giving these results. Each of these metrics is designed for the output of a (binary) classification algorithm. These outputs have a number of records, and for each record there will be a "true" or "false" classification. However, we will discuss how to extend these measurements to other types of output where appropriate. Briefly, a classification algorithm takes some input set and, for each member of the sets, classifies it as one of a fixed set of outputs. Examples of classification include facial recognition (match or not match), spam filters, and other kinds of pattern recognition with categorical output. Binary classification is a type of classification where there are only two possible outputs. An example of binary classification comes from perhaps the most famous educational data set in data science: the Titanic passenger dataset, where the binary outcome is survival of the disastrous sinking. Finally, a quick note on syntax. The code samples in this article make heavy use of list comprehension with [function(element) for element in list if condition]. I use this syntax for its concision. If you are unfamiliar with this syntax, here is a resource. Otherwise, I tend to be explicit in my implementations, much shorter implementations of the following functions are trivial to construct. Seven Metrics for the Seven Seas While we will implement these measurements ourselves, we will also use the popular sklearn library to perform each calculation. Generally, it is best to use an established library like sklearn to perform standard operations such as these as the library's code is optimized, tested, and easy to use. This saves you time and ensures higher code quality, letting you focus on the differentiating aspects of your data science project. For this article, we'll be exploring a variety of metrics and several example output sets. You can follow along on FloydHub's data science platform by clicking the link below. Let's start with defining an extremely simple example binary dataset. Imagine, for a moment, that you are a pirate instead of a programmer. Furthermore, imagine that you have a device that purports to identify whether a ship on the horizon is carrying treasure, and that the device came with the data that we synthesized earlier. In this example, a "1" or positive identifies a ship with treasure (💰), and a "0" or negative identifies a ship without treasure (🧦). We'll use this example throughout the article to give meaning to the metrics. # Setup A actual_a = [1 for n in range(10)] + [0 for n in range(10)] predicted_a = [1 for n in range(9)] + [0, 1, 1] + [0 for n in range(8)] print(actual_a) print(predicted_a) X Raid-1 Raid-2 Raid-3 Raid-4 Raid-5 Raid-6 Raid-7 Raid-8 Raid-9 Raid-10 Raid-11 Raid-12 Raid-13 Raid-14 Raid-15 Raid-16 Raid-17 Raid-18 Raid-19 Raid-20 Actual 💰 💰 💰 💰 💰 💰 💰 💰 💰 💰 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 Predicted 💰 💰 💰 💰 💰 💰 💰 💰 💰 🧦 💰 💰 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 This only produces 20 results. Statisticians debate about the fewest number of results needed for a conclusion to be, well, conclusive, but I wouldn't want to use fewer than 20. The number of results is of course domain and problem dependent, but in this case these 20 fake results will be enough to demonstrate the various metrics. Confusion Matrix A holistic way of viewing true and false positive and negative results is with a confusion matrix. Despite the name, it is a straightforward table that provides an intuitive summary of the inputs to the calculations that we made above. Rather than a decimal correctness, the confusion matrix gives us counts of each of the types of results. # Confusion Matrix from sklearn.metrics import confusion_matrix def my_confusion_matrix(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) true_negatives = len([a for a, p in zip(actual, predicted) if a == p and p == 0]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return "[[{} {}]\n [{} {}]]".format(true_negatives, false_positives, false_negatives, true_positives) print("my Confusion Matrix A:\n", my_confusion_matrix(actual_a, predicted_a)) print("sklearn Confusion Matrix A:\n", confusion_matrix(actual_a, predicted_a)) This yields the following table: [[8 2] [1 9]] Where the numbers correspond to: [[true_negatives false_positives] [false_negatives true_positives]] While there is no analytic conclusion in a confusion matrix, they are useful for two reasons. The first is that it is a concise visual representation of the absolute counts of correct and incorrect output. Furthermore, the confusion matrix introduces us to the four building blocks of our other metrics. We're back on the pirate ship and evaluating the test results that came with the treasure-seeking device. In this case: • A "True Positive" (TP), is when the device correctly identifies that a ship is carrying treasure. You raid the ship and share plunder among the crew. • A "False Positive" (FP) is when the device says that a ship has treasure but it is empty. You raid the ship and the crew stages a mutiny over the disappointment of finding it empty. • A "False Negative" (FN) is when the device says that a ship does not have treasure but it actually does. You let the ship pass, but when you get back to port the crew hears of another ship taking the bounty and some defect to the more successful crew. • A "True Negative" (TN) is when the device correctly identifies that the ship is devoid of treasure. Your crew saves their strength as you let the ship pass. Obviously, you want to maximize acquired treasure and minimize crew frustration. Should you use the device? We will calculate metrics to help you make an informed decision. Accuracy $$Accuracy = \dfrac{True\space Positive + True\space Negative}{True\space Positive + True\space Negative + False\space Positive + False\space Negative}$$ $$Accuracy = \dfrac{Ships\space carrying\space treasures\space correctly\space identified + Ships\space without\space treasures\space correctly\space identified}{All\space types\space of\space raid}$$ After synthesizing this data, our first metric is accuracy. Accuracy is the number of correct predictions over the output size. It is an incredibly straightforward measurement, and thanks to its simplicity it is broadly useful. Accuracy is one of the first metrics I calculate when evaluating results. # Accuracy from sklearn.metrics import accuracy_score # Accuracy = TP + TN / TP + TN + FP + FN def my_accuracy_score(actual, predicted): #threshold for non-classification? true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) true_negatives = len([a for a, p in zip(actual, predicted) if a == p and p == 0]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return (true_positives + true_negatives) / (true_positives + true_negatives + false_positives + false_negatives) print("my Accuracy A:", my_accuracy_score(actual_a, predicted_a)) print("sklearn Accuracy A:", accuracy_score(actual_a, predicted_a)) The accuracy on this output is .85, which means that 85% of the results were correct. Note that, on average, random results yield an accuracy of 50%, so this is a major improvement (of course, this data is fabricated, but the point stands). This seems pretty good! Your crew will only doubt your leadership 15% of the time. That said, a mutiny at sea is worse than a grumbling on the docks, so you are right to be more concerned about false positives. Fortunately, another metric, precision, can help. Precision $$Precision = \dfrac{True\space Positive}{True\space Positive + False\space Positive}$$ $$Precision = \dfrac{Ships\space carrying\space treasures\space correctly\space identified}{Ships\space carrying\space treasures\space correctly\space identified + Ships\space incorrectly\space labeled\space as\space carrying\space treasures}$$ Precision is a similar metric, but it only measures the rate of false positives. In certain domains, like spam detection, a false positive is a worse error than a false negative (generally, missing an important email is worse than the inconvenience of deleting a piece of spam that snuck through the filter). # Precision from sklearn.metrics import precision_score # Precision = TP / TP + FP def my_precision_score(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) return true_positives / (true_positives + false_positives) print("my Precision A:", my_precision_score(actual_a, predicted_a)) print("sklearn Precision A:", precision_score(actual_a, predicted_a)) Our precision is approximately .818, lower than our accuracy. This means that false positives are a larger part of our error set. Indeed, we have two false positives in this example and only one false negative. This does not bode well for your career as a pirate captain if nearly one in five raids end in mutiny! However, for a more warlike crew, the disappointment of missing out on a raid might outweigh the cost of a pointless boarding. In such a situation, you would want to optimize for recall to reduce false negatives. Recall $$Recall = \dfrac{True\space Positive}{True\space Positive + False\space Negative}$$ $$Recall = \dfrac{Ships\space carrying\space treasures\space correctly\space identified}{Ships\space carrying\space treasures\space correctly\space identified + Ships\space carrying\space treasures\space incorrect\space classified\space as\space ships\space without\space treasures}$$ Recall is the opposite of precision, it measures false negatives against true positives. False negatives are especially important to prevent in disease detection and other predictions involving safety. # Recall from sklearn.metrics import recall_score def my_recall_score(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return true_positives / (true_positives + false_negatives) print("my Recall A:", my_recall_score(actual_a, predicted_a)) print("sklearn Recall A:", recall_score(actual_a, predicted_a)) Our recall is .9, higher than the other two metrics. If we are especially concerned with reducing false negatives, then this is the best result. As a captain using your device, you are only letting one in ten ships pass by with their treasure holds intact. Precision - Recall Curve A precision-recall curve is a great metric for demonstrating the tradeoff between precision and recall for unbalanced datasets. In an unbalanced dataset, one class is substantially over-represented compared to the other. Our dataset is fairly balanced, so a precision-recall curve isn’t the most appropriate metric, but we can calculate it anyway for demonstration purposes. #Precision-Recall from sklearn.metrics import precision_recall_curve import matplotlib.pyplot as plt precision, recall, _ = precision_recall_curve(actual_a, predicted_a) plt.step(recall, precision, color='g', alpha=0.2, where='post') plt.fill_between(recall, precision, alpha=0.2, color='g', step='post') plt.xlabel('Recall') plt.ylabel('Precision') plt.ylim([0.0, 1.0]) plt.xlim([0.0, 1.0]) plt.title('Precision-Recall curve') plt.show() Our precision and recall are pretty similar, so the curve isn’t especially dramatic. Again, this metric is better suited to unbalanced classifiers. F1-Score $$F1-Score = 2 * \dfrac{Recall * Precision}{Recall + Precision}$$ What if you want to balance the two objectives: high precision and high recall? Or, as a pirate captain, you want to optimize towards capturing treasure and avoiding mutiny? We calculate the F1-score as the harmonic mean of precision and recall to accomplish just that. While we could take the simple average of the two scores, harmonic means are more resistant to outliers. Thus, the F1-score is a balanced metric that appropriately quantifies the correctness of models across many domains. # F1 Score from sklearn.metrics import f1_score # Harmonic mean of (a, b) is 2 * (a * b) / (a + b) def my_f1_score(actual, predicted): return 2 * (my_precision_score(actual, predicted) * my_recall_score(actual, predicted)) / (my_precision_score(actual, predicted) + my_recall_score(actual, predicted)) print("my F1 Score A:", my_f1_score(actual_a, predicted_a)) print("sklearn F1 Score A:", f1_score(actual_a, predicted_a)) The score of .857, slightly above that of the average, may or may not give you the confidence to rely on the device to help you decide which ships to raid. In evaluating the tradeoffs between precision and recall, you might want to draw an ROC curve on the back of one of the maps on the navigation deck. Area Under the Curve Unlike precision-recall curves, ROC (Receiver Operator Characteristic) curves work best for balanced data sets such as ours. Briefly, AUC is the area under the ROC curve that represents the tradeoff between Recall (TPR) and Specificity (FPR). Like the other metrics we have considered, AUC is between 0 and 1, with .5 as the expected value of random prediction. If you are interested in learning more, there is a great discussion on StackExchange as usual. Sklearn provides an implementation for AUC on binary classification. The relevant equations are as follows: $$True\space Positive\space Rate\space (a.k.a.\space Recall\space or\space Sensitivity) = \dfrac{True\space Positive}{True\space Positive + False Negative}$$ Refer back to the section on recall for this one; the TPR and recall are equivalent metrics. $$False\space Positive\space Rate\space (a.k.a.\space Specificity) = \dfrac{False\space Positive}{False\space Positive + True\space Negative}$$ $$False\space Positive\space Rate = \dfrac{Ships\space without \space treasures\space incorrect\space classified\space as\space ships\space carrying\space treasures}{Ships\space without \space treasures\space incorrect\space classified\space as\space ships\space carrying\space treasures + Ships\space carrying\space treasures\space incorrect\space classified\space as\space ships\space without\space treasures}$$ The specificity or FPR of a classifier is its “false alarm metric.” Basically, it measures the frequency at which the classifier “cries wolf,” or predicts a positive where a negative is observed. In our example, a false positive is grounds for mutiny and should be avoided at all costs. We consider the tradeoff between TPR and FPR with our ROC curve for our balanced classifier. #ROC from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve import matplotlib.pyplot as plt print("sklearn ROC AUC Score A:", roc_auc_score(actual_a, predicted_a)) fpr, tpr, _ = roc_curve(actual_a, predicted_a) plt.figure() plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve') plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--') #center line plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.legend(loc="lower right") plt.show() The AUC for our data is .85, which happens to be the same as our accuracy, which is not often the case (“see: balanced accuracy”). Again, this is a metric that balances the risks of the crew deserting and mutinying given the performance of our device to identify ships carrying treasures, and a ROC curve carved into the table of the navigator’s table could help you make your raiding decisions. Because standard precision and recall rely on binary classification, it is non-trivial to extend AUC to represent a multidimensional general classifier as some sort of hypervolume-under-the-curve. However, several of the metrics are more straightforward to extend to evaluating other types of predictions. Other Output Types As I mentioned earlier, we can perform minor adaptations to these metrics to measure the performance of different types of output. We'll consider the simplest metric, accuracy, for both non-binary categorical output and continuous output. In the following examples, an updated version of the device, version B, tells you if a ship has no treasure (0), some treasure (1), or tons of treasure (2). Version C of the device tells you how many islands you can buy with the treasure on the target ship. For general categorical output, accuracy is very straightforward: correct_predictions/all_predictions. Using an example output with three categories, we can still determine the accuracy. In code, this looks like the following. # Accuracy for non-binary predictions def my_general_accuracy_score(actual, predicted): correct = len([a for a, p in zip(actual, predicted) if a == p]) wrong = len([a for a, p in zip(actual, predicted) if a != p]) return correct / (correct + wrong) print("my Accuracy B:", my_general_accuracy_score(actual_b, predicted_b)) print("sklearn Accuracy B:", accuracy_score(actual_b, predicted_b)) As you may "recall," precision and recall measure false positives and negatives. With a bit of intuition and domain knowledge, we can extend this to a general classifier. In example B, I decided that "2" represents a positive and was able to generate precision as follows. def my_general_precision_score(actual, predicted, value): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == value]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == value]) return true_positives / (true_positives + false_positives) print("my Precision B:", my_general_precision_score(actual_b, predicted_b, 2)) While sklearn supports accuracy for general categorical predictions, we can add a threshold parameter to calculate accuracy for a continuous prediction. Choosing the threshold is as important as every other number that you set during the modeling process, and should be set based on your domain knowledge before you see the results. After applying the threshold, the predictions can be treated a binary classifier and any of the seven metrics we have covered now apply to the data. # Accuracy for continuous with threshold def my_threshold_accuracy_score(actual, predicted, threshold): a = [0 if x >= threshold else 1 for x in actual] p = [0 if x >= threshold else 1 for x in predicted] return my_accuracy_score(a, p) print("my Accuracy C:", my_threshold_accuracy_score(actual_c, predicted_c, 5)) Departing from the standard implementations gives us room to expand these fundamental metrics to cover most predictions, allowing for consistent comparison between models and their outputs. Conclusion These seven metrics for (binary) classification and continuous output with a threshold will serve you well for most data sets and modeling techniques. For the rest, minimal adjustments can create strong metrics. A single note of caution before we discuss adapting these standard measurements: always determine your evaluation criteria before beginning to evaluate the results. There are many subtle issues in a modeling process that can lead to overfitting and bad models, but adjusting the correctness evaluation metric based on the results of the model is an egregious departure from the accepted principles of a modeling workflow and will almost certainly promote overfitting and other bad results. Remember, accuracy is not the goal, a good model is the goal. That goal is just a corollary of Goodhart’s law, or the idea that “when a measure becomes a target, it ceases to be a good measure..” Especially when you’re developing new systems, optimizations for individual metrics can hide overarching issues in the system. Rachel Thomas writes more about this, saying “I am not opposed to metrics; I am alarmed about the harms caused when metrics are overemphasized, a phenomenon that we see frequently with AI, and which is having a negative, real-world impact.” There are extensions of classification that permit interesting modification to correctness metrics. For example, ordinal classification involves an output set where there are a fixed number of distinct categories, but those categories have a set order. Military rank is one type of ordinal data. Sometimes, you can handle ordinal data like continuous data and establish a threshold, then use a binary algorithm to handle the correctness. If a lieutenant in an army wanted to know if soldiers in a dataset were predicted to be her rank and above, she could set the cutoff at lieutenant and use a standard metric like accuracy or precision to evaluate the correctness of her prediction method. However, a more generalized version of the same evaluation could use weighted accuracy to check the results. If a soldier is predicted to be a captain but he is in fact a sergeant, that is more incorrect than if he were predicted to be a lieutenant. Such disparity can be recognized in a custom implementation of accuracy or any other metric as appropriate for the domain, or by adding a penalty function to the loss function in the model. Ultimately, I think this is what makes data science so interesting, there are opportunities to create custom solutions from the beginning to the end of the modeling process. However, the more non-standard the data and algorithm used, the more important it is to consider standard, fundamental metrics like accuracy, precision, and recall for evaluating the results. By using or adapting these metrics you can have confidence that your novel approach to a problem is correct with respect to standard practices.
{}
# How to create an alternative to shortcut "= or \hyp{}? After this question answer How to use the standard - (hyphen) as the \hyp{} command from the hyphenat package?, I decided to not use that hack. Then, I am looking for an alternative to that. On my case, using "= is almost as hard as \hyp{} because I write latex code on Sublime Text, and I use this package https://github.com/r-stein/sublime-text-latex-smart-quotes which does this: i.e., when I press " is sends and '' according to where I am on the word: https://github.com/r-stein/sublime-text-latex-smart-quotes/issues/4 The only way to send " is by pressing Ctrl+L, L, ", which is as hard as have to type \hyp{}. I love this feature of because I never have to use " unless I would like to do "=, but I prefer to keep the https://github.com/r-stein/sublime-text-latex-smart-quotes feature and use something else other than "=. Can I bind it to something as handly as ´= (not =)? Or do you suggest something else? Then, I can use it like this something´=hyphenated, instead of something"=hyphenated I tried doing this: \newcommand{´=}{\hyp{}} but latex did not liked it: test1.tex: LaTeX Error: Missing \begin{document}. # Update 1 I found this question How to hyphenate a reference that has a lastname with a hyphen? suggesting this, but it had no effect: \documentclass[10pt,a5paper,twoside]{article} \usepackage{hyphenat} \usepackage[english]{babel} \defineshorthand{´=}{\hyp{}} \begin{document} \section{Show font} Tests. Encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding. Encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding. Encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding. \end{document} # Update 2 After reading http://linorg.usp.br/CTAN/macros/latex/required/babel/base/babel.pdf I managed to get this working: \documentclass[10pt,a5paper,twoside]{article} \usepackage{hyphenat} \usepackage[english]{babel} \useshorthands{"} \defineshorthand{"=}{\hyp{}} \begin{document} \section{Show font} Tests. Encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding. Encoding"=encoding"=encoding"=encoding"=encoding"=encoding"=encoding"=encoding"=encoding"=encoding. Encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding´=encoding. Encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding. \end{document} But when using something like: \useshorthands{´} \defineshorthand{´=}{\hyp{}} Latex throws this error: Package babel Info: Making an active character on input line 5. test1.tex:5: LaTeX Error: Missing \begin{document}. • What comes after newcommand must be a macro. Hence the error. Have you considered using the package csquotes instead of using that plugin? – Weijun Zhou Apr 28 '19 at 2:04 • I did not have considered that package. I do know why, but I already have that package included on my thesis template, but seems not to be doing anything useful. Then, I think I will keep using Sublime Text with thing'' – user Apr 28 '19 at 2:11 • I am using it with \MakeOuterQuote{"} and I can type all the double quotes without problem. They are automatically matched and replaced with the correct (opening or closing) one. I seldom use single quotes but there is something similar in the document. – Weijun Zhou Apr 28 '19 at 2:12 • Check page 12 of the babel manual. You need \useshorthands* before you define a shorthand. – Weijun Zhou Apr 28 '19 at 3:40 • ´ is not an ascii char, it is U+B4, which in utf8 is encoded with two bytes (0xC2 0xB4). You can't use it for a shorthand. – Ulrike Fischer Apr 28 '19 at 10:00 1. I managed to create this shorthand with !-, but it is breaking words after ! alone. For example, This is! Sparta. will show as This is!Sparta. 2. Then, instead of using !-, I think I will be using $- if this does not break anything else. This is breaking Latex text editor syntax parsing. They are thinking we are on math mode after using some word$-thing. 3. Finally, using ~- does not break anything because even when using ~ alone, it works as indented It breaks when using ~--~ dashes. 4. Maybe finally, using ~= does not break anything because even when using ~ alone, it works as indented: \documentclass[10pt,a5paper,twoside]{article} \usepackage{hyphenat} \usepackage[english]{babel} \useshorthands{~} \defineshorthand{~-}{\hyp{}} \begin{document} \section{Show font} Tests. Encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding-encoding. Testing~This motherfoer1. Testing~ This motherfoer2. Encoding~-encoding~-encoding~-encoding~-encoding~-encoding~-encoding~-encoding~-encoding~-encoding. Encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding\hyp{}encoding. \end{document} --> See also these other question about the problems the ~= can cause: • As has been said in other comments, you need to use a basic ASCII character (ASCII code 0~126) as the prefix. The prefix you want to use simply won't work. – Weijun Zhou Apr 28 '19 at 12:12
{}
# How to render a formula in WPF or WinForms I need to have a way to draw a mathematical formula in Windows Forms or WPF. Some "FormulaTextBox" control would be exactly what I need. I'm not asking about full implementation of LaTEX, but at least something beyond RichTextBox possibilities, with division lines, square roots, etc. P.S. C# is so powerful. A way to draw formulas with C# (.NET) should be with us! ## Solutions have been found: 1. This solution uses MathTex library (MimeTex in the past) to generate a gif files from TeX like strings. Cons: • There is no Windows version of the library maintained and tested. The article suggests memory leaks and other issues that could appear because of this. One have to adjust and maintain the windows version by its own. • The general approach has unnatural design for winforms or WPF. While it could fit the web it looks far not so optimal for winforms. • There is much to be done from the article example to a real working control (there is always something to be done but I would appreciate a more ready-to-go solution) - this article looks like it could help –  jberger Jan 30 '12 at 16:27 Looks interesting from the first glance. Thank you! I will review it tomorrow and post here. –  MajesticRa Jan 30 '12 at 21:53 Here's a list of options, pulled from several webpages online, as well as a few similar questions on SO • WPF-Math, an (inactive) WPF library for rendering math-related Tex. • gNumerator is a WinForms control that renders MathML. It is native C#, but appears to be quite old. • Math Expressions, a commercial WinForms control for displaying and editing math equations. Note: Not free • There's an unofficial port of JMathTex to a C# WPF control • The Windows version of the LaTex editor Lyx uses a native library called MikTex you could take a look at. I saw mention somewhere that the tex4ht package renders math equations to images • MimeTex/MathTex, as you already mentioned • You could also use a WebBrowser control, and just locally include one of many javascript libraries for rendering LaTex. • You could pawn off the work onto Microsoft Word (Example - requires users to have MS Word installed!) - +1 You probably deserve this bountry... I didn't see your WebBrowser control point before I posted my answer. –  Jeremy Thompson Jan 31 '12 at 4:36 Perhaps you can use the Wolfram Alpha API to retrieve the images. - Thank you for a response. But isn't it web-only? How to embed the thing as stadalone winforms control? Could you, please, provide some short hints or code? –  MajesticRa Jan 30 '12 at 9:53 Its a bit of a gamble, but you can just call the API from your winforms app using WebRequest and parse the response as XML to get the right node. Then just set the source of a PictureBox to the url. –  TJHeuvel Jan 30 '12 at 9:56 That's why "standalone" word was used. "using WebRequest" is too much burden for a standalone application. In some rare cases it could be acceptable, I agree. But it is definitely not a general answer for winforms or WPF. –  MajesticRa Jan 30 '12 at 10:14 +1 - even if it is a stand-alone app, a mode to go online and use Wolfram Alpha's API is a good idea. –  Jeremy Thompson Jan 31 '12 at 4:21 "a mode to go online and use Wolfram" - a good joke, yes)) Tell it your clients))) –  MajesticRa Jan 31 '12 at 6:18 If you want something beyond a RichTextBox abilities to render pie, divisions and sqr roots and etc, you could just use WebBrowserControl(s) as textbox's. Then to render formula's you could leverage techniques shown in this webpage, save it to your desktop and open the html file: http://www.wjagray.co.uk/maths/ASCIIMathTutorial.html Obviously you'll need a special syntax (or special calculator buttons) to enter the formula's and I'm guessing from looking at the customisations.js file driving that webpage that you could make your own list of operators and functions too. - Here's what I'd do if none of the .NET specific solutions work for you: It's a bit hacky, but it'll work and your users won't know the difference. Download the Mathjax library. When your user enters in the equation, you could convert the equation to LaTex or MathML. You would then take the LaTex or MathML and generate an HTML file that references Mathjax and display the file in your tiny WebBrowser window (valid for both WinForms and WPF). This shouldn't require an internet connection. Like I said, your users won't be any the wiser. -
{}
Tetration Forum You're currently viewing a stripped down version of our content. View the full version with proper formatting. Is obvius that the mathematical concept of sequence of objects is formalized with the concept of indexed family (inside a set theory) But my question is about who generalized the concept of Hyperoperation from the usual Goodstein 3-ary function (or the Knuth's uparrows) to some indexed families that satisfies some properties. I'm tryng to continue the discussion started in the thread about the dustributive property of Bennet's operation family: (05/27/2014, 08:22 PM)MphLee Wrote: [ -> ] (05/27/2014, 07:45 PM)andydude Wrote: [ -> ]@MphLee Hyperoperations, in the general sense, are any sequence of binary operations that includes addition and multiplication. The commutative hyperoperations satisfy this property because $\exp^0(\ln^0(a) + \ln^0(b)) = a + b$ and $\exp^1(\ln^1(a) + \ln^1(b)) = e^{\ln(a) + \ln(b)} = e^{\ln(a)}e^{\ln(b)} = a \times b$. That formula is the starting point, it is the definition of commutative hyperoperations. The fact that it contains addition and multiplication can be discussed and proved from the definition. I'm even aware that the term Hyperoperations usually means (can be formalized as) an indexed family of binary operations $\{*_i\}_{i \in I}$ whith addition, multiplication and exponentiation belonging to the image of the indexed family (the image of the family is defined to be the image of the set of indexes- set of ranks- via the indicization function). This definition is the one I found on Wikipedia and is very smart even if it cuts the Commutative hyperoperations out of the game (Maybe we can make a weaker concept of Hyperoperations Family without the exponentiation requirement, I would call them Weak Hyperoperations Families)... Anyways I'm very courious...I was not able to find references about this terminology and I did not even find who introduced this formal definition. Who actually gave the first definition of when an indexed family of binary operations is an Hyperoperations family? I need the reference because I made some improvement in the definition while writing a paper about the Hyperoperations. First of all, I believe I wrote most of the Hyperoperations article on Wikipedia. There was an existing article called "Hyper operator", but there were many people on the Talk sub-page that led me to believe that the page needed a lot of work, so I tried to do my best with the rewrite. I included every reference that I could find on the topic, and compiled what I think is a comprehensive list of references that doesn't focus on Tetration, but all hyperoperations in general. (07/17/2014, 05:29 PM)andydude Wrote: [ -> ]First of all, I believe I wrote most of the Hyperoperations article on Wikipedia. There was an existing article called "Hyper operator", but there were many people on the Talk sub-page that led me to believe that the page needed a lot of work, so I tried to do my best with the rewrite. I included every reference that I could find on the topic, and compiled what I think is a comprehensive list of references that doesn't focus on Tetration, but all hyperoperations in general. Speaking of Wikipedia, has anyone else noticed that the "linear" approximation for tetration that is listed there is the naive version on the interval [-1,0], which is only C1 continuous for base e? I'm pretty sure we've discussed a better linear approximation somewhere, which is linear on a base-specific unit interval, which is C1 continuous, not just C0 continuous. I stumbled upon it way back in the day: I see the same issue with the wikepedia article for the superlogarithm: http://en.wikipedia.org/wiki/Super-logar...roximation In fact, that article explicitly calls out that it's C0 continuous, when a proper choice of interval would be C1 continuous. I found a post here on the forum where I discussed the C1 linear approximation of the slog: http://math.eretrandre.org/tetrationforu...php?tid=98 Granted, these methods only work for real bases greater than eta*. However, I still think they're far more useful than the naive linear approximation. * Real, because you need to be able to place e in a particular unit interval, based on iterated logarithms. Greater than eta, because, as I mentioned elsewhere, the linear approximation for bases less than eta is only valid between the primary fixed points, not between 0 and the lower fixed point. (07/17/2014, 05:29 PM)andydude Wrote: [ -> ]First of all, I believe I wrote most of the Hyperoperations article on Wikipedia. There was an existing article called "Hyper operator", but there were many people on the Talk sub-page that led me to believe that the page needed a lot of work, so I tried to do my best with the rewrite. I included every reference that I could find on the topic, and compiled what I think is a comprehensive list of references that doesn't focus on Tetration, but all hyperoperations in general.
{}
5 # Abii 25 Q1) At a football stadium a goalkeeper kicks a football at initial velocity of 16 mls in the vertical direction and 12 mls in horizontal direction (a) At wh... ## Question ###### Abii 25 Q1) At a football stadium a goalkeeper kicks a football at initial velocity of 16 mls in the vertical direction and 12 mls in horizontal direction (a) At what speed does the ball hits the ground? (b) What is the maximum height that ball can reach? (c) How long does the ball ?remain in the airJill abii 25 Q1) At a football stadium a goalkeeper kicks a football at initial velocity of 16 mls in the vertical direction and 12 mls in horizontal direction (a) At what speed does the ball hits the ground? (b) What is the maximum height that ball can reach? (c) How long does the ball ?remain in the air Jill #### Similar Solved Questions ##### Given that f(x) is an odd function and fe(x)dx =3 and f3 €(x)dx =15 _ use the properties of the definite integral to find the followings. Justify your answer: fS c (xJdxb) f, f(x)dx Given that f(x) is an odd function and fe(x)dx =3 and f3 €(x)dx =15 _ use the properties of the definite integral to find the followings. Justify your answer: fS c (xJdx b) f, f(x)dx... ##### Compute the derivative Df (1,-1,1) for f (X, Y, 2) = (212y -,~S1y cos(xyz)) Compute the derivative Df (1,-1,1) for f (X, Y, 2) = (212y -, ~S1y cos(xyz))... ##### Wnat volume (in liters) of a 1.5 M FeCly solution needed to provide 4.0 moles of chloride (CI-)?KBr solution made by diluting 75.0 mL ofa 0.300 M solution to final volume of 100What is the molarity Wnat volume (in liters) of a 1.5 M FeCly solution needed to provide 4.0 moles of chloride (CI-)? KBr solution made by diluting 75.0 mL ofa 0.300 M solution to final volume of 100 What is the molarity... ##### Chlorination of alkanes can produce a multitude of products. Determine the number of monochlorinated and dichlorinated products that can be obtained by chlorination of 3-methylpentane. (Consider constitutional isomers only. Do not count stereoisomers: _Number of monochlorinated products (CeH1Cl)Number of dichlorinated products (CeH12Clz)H3CCH3NumberNumberCH33-methylpentane Chlorination of alkanes can produce a multitude of products. Determine the number of monochlorinated and dichlorinated products that can be obtained by chlorination of 3-methylpentane. (Consider constitutional isomers only. Do not count stereoisomers: _ Number of monochlorinated products (CeH1Cl) Nu... ##### (11 points Answer the following questions about the sphere whose equation given by2? +y? + 22 62 + 8y = 24Find the radius of the sphere.Radius2. Find the center of the sphere. Write the center as point (@,b,c) where @, b,and c are numbersCenter: (11 points Answer the following questions about the sphere whose equation given by 2? +y? + 22 62 + 8y = 24 Find the radius of the sphere. Radius 2. Find the center of the sphere. Write the center as point (@,b,c) where @, b,and c are numbers Center:... ##### Problen 6: Let f(z) be Write the function as periodic function with period 2z such that: trigonoretric series, f(c) =-ri-<I<#f(s) =c+(an cos(nz) + b_ sin(nz))Determine the Fourier coefficients €, @n,bn- Problen 6: Let f(z) be Write the function as periodic function with period 2z such that: trigonoretric series, f(c) =-ri-<I<# f(s) =c+ (an cos(nz) + b_ sin(nz)) Determine the Fourier coefficients €, @n,bn-... ##### 2) Differentiate h(x) x3 _ 4x2 + 2 Show all steps. Give answer with positive exponents. 2) Differentiate h(x) x3 _ 4x2 + 2 Show all steps. Give answer with positive exponents.... ##### Use the Henderson-Hasselbalch equation to explain how the $\mathrm{pH}$ of a buffer solution based on a weak acid and its conjugate base changes (a) when the ionization constant of the weak acid increases and (b) when the acid concentration is decreased relative to the concentration of its conjugate base. Use the Henderson-Hasselbalch equation to explain how the $\mathrm{pH}$ of a buffer solution based on a weak acid and its conjugate base changes (a) when the ionization constant of the weak acid increases and (b) when the acid concentration is decreased relative to the concentration of its conjugate... ##### Cxeston 18Kf()=& + 1x22eod f(0)then f(2) equaki+1334+428 4-1Oe 4 _%2"08*+4_0-307+35-1 Cxeston 18 Kf()=& + 1x22 eod f(0) then f(2) equaki +133 4+428 4-1 Oe 4 _%2" 08*+4_0-3 07+35-1... ##### Use this Iist of Basic Taylor Serles fInd the Taylor Serles for f(x) based at 6x-9 5x-6 button In CalcPad or type "Infinity" in all lower-case_give the Interval on which the series converges_ (If vou needenteruse theThe Taylor series for f(x) 67_05x-6The Taylor serles converges to f(x) for Ix < Use this Iist of Basic Taylor Serles fInd the Taylor Serles for f(x) based at 6x-9 5x-6 button In CalcPad or type "Infinity" in all lower-case_ give the Interval on which the series converges_ (If vou need enter use the The Taylor series for f(x) 67_0 5x-6 The Taylor serles converges to f(... ##### Find the $x$ - and $y$ -intercepts and graph the ellipse. (GRAPH CAN'T COPY).$$rac{x^{2}}{36}+ rac{y^{2}}{4}=1$$ Find the $x$ - and $y$ -intercepts and graph the ellipse. (GRAPH CAN'T COPY). $$\frac{x^{2}}{36}+\frac{y^{2}}{4}=1$$... ##### In a recent survey 33% of respondents stated that they work inthe construction industry, 28% stated that they had suffered aworkplace injury and 10% stated that they work in the constructionindustry and have suffered a workplace injury. What percentage ofworkers have suffered an accident given that they work in theconstruction industry?A.30 %B.36%C.61%D.51% In a recent survey 33% of respondents stated that they work in the construction industry, 28% stated that they had suffered a workplace injury and 10% stated that they work in the construction industry and have suffered a workplace injury. What percentage of workers have suffered an accident given t... ##### Are lone pairs more electronegative? Are lone pairs more electronegative?... ##### 1. Pick the right choices and complete thepassagesThe hydrolytic enzymes of the lysosome have the following tag:oligosaccharides or Mannos-6-phosphate .Besides the endocytic pathway that leads ingested material to thelysosomes, another pathway that help in the digestion ofinternal compartments in a cell is acidic digestion/regulated release/ autophagy. When receptors brought intothe endosomes through endocytosis are not returned to the cellsurface or destroyed in the lysosomes, they undergo a pr 1. Pick the right choices and complete the passages The hydrolytic enzymes of the lysosome have the following tag: oligosaccharides or Mannos-6-phosphate . Besides the endocytic pathway that leads ingested material to the lysosomes, another pathway that help in the digestion of internal compartments... ##### Let Yi, Y> Yj be independent and identically distributed continuous distributions with mean= 4o Derive the following:(a) P( Y() > Ho ) (6) IfYi, Yz Yf are also symmetric distributions with mean (4o)= median simplify your answrer to your answer above Let Yi, Y> Yj be independent and identically distributed continuous distributions with mean= 4o Derive the following: (a) P( Y() > Ho ) (6) IfYi, Yz Yf are also symmetric distributions with mean (4o)= median simplify your answrer to your answer above... ##### Compute 1 lim J & ~ 6t + 5 dt_ x-5 (x _ 5)2 Compute 1 lim J & ~ 6t + 5 dt_ x-5 (x _ 5)2...
{}
## Wednesday, 1 May 2013 ### The Magic Angle I was looking at some electrodynamics before I go into modelling waveguides. At the same time I was looking into solid state NMR and also polarisation conditions in our laser system at the lab; both these techniques make use of the magic angle in order to remove dipole effects. ## What is a dipole? Dipoles appear in many different parts of nature. The simplest to think about is a magnet. By putting iron filing around a bar magnet the so-called field lines (lines of equal potential/magnetism) can be seen. Bar magnet under paper with iron filing showing the field lines As a convention we draw lines with arrows from the north to the south pole. The arrows denote the vector (vectors have direction and magnitude i.e. velocity). Representation of a dipole Dipoles also appear between charged particles such as a positive nucleus of an atom and the electron, where the arrows go from positive to negative by convention. ## Physics of a dipole A good explanation of how you can work out the field from a dipole can be seen here. Briefly, you can imagine two charged particles a set distance apart. You can work out the fields from simple electrostatics as the sum of the electric field from two points' charges and then as they come close together you end up with a point source dipole. This also works for magnetism, however as Maxwell's equations tell us magnetic flux cannot be created or come from a source so there is no divergence in the field. Mathematically it looks like this: $\nabla \cdot B = 0$ But basically it means that if I were to draw a box around the dipole, the number of field lines going in (flux in) would equal the number of field lines going out (flux out). Thinking about two nuclei close to each other, there will be an addition or subtraction to the overall magnetic field from the magnetic field of the other nuclei. In order to remove this we can look at the z term of the magnetic field (in this case along the axis of the dipole). $B_{z}=\frac{|\mu|}{r^{3}}(3cos^{2}\theta -1)$ We are wanting to see when this contribution goes to zero. Setting this to zero, we see that when the z component of the magnetic field is zero, giving an angle of 54.7 degrees. This is what is called the magic angle. ## Modelling a dipole To illustrate this I have a small script I wrote that plots the field lines from a magnet and then integrates a path (in green) where a particle would travel if influenced by the field. The line (in blue) drawn is at 54.7 degrees. The plot is a quiver plot with the arrows indicating the field of the magnetic field at that point. You can see where the blue line at 54.7 degrees intersects the arrows that the z component is zero. from pylab import * from numpy import ma import numpy as np import matplotlib.pyplot as plt import matplotlib from matplotlib.collections import PatchCollection import matplotlib.patches as mpatches from scipy.integrate import odeint #Size of simulation box xmax=3.0 xmin=-xmax NX=19 zmax=3.2 zmin=-zmax NZ=19 # Setting up the grid x=linspace(xmin,xmax,NX) z=linspace(zmin,zmax,NZ) X, Z = meshgrid(x, z) #Function that describes the vector field for the integrator def f(Y,t): X, Z = Y R = np.sqrt(X**2 + Z**2) return((0.5*(3*X*Z/R**5)),(0.5*(3*Z**2/R**5 - 1/R**3))) #Applying the field to the gridded values R = np.sqrt(X**2 + Z**2) Bx = 0.5*(3*X*Z/R**5) Bz = 0.5*(3*Z**2/R**5 - 1/R**3) #Had to mask some values so that you don't get large arrows right in the middle M = zeros((X.shape[0],Z.shape[1]), dtype='bool') a, b = (NX/2), (NZ/2) r=3.1 E,W = np.ogrid[-a:NX-a, -b:NZ-b] print E.shape, W.shape, mask = E*E + W*W <= r*r #Magic angle line line = 0.7*x up = z*0 #Setting up plot fig = plt.figure() #Integrating paths for z20 in [-1.0, 1.0]: tspan = np.linspace(0, 62, 100) z0 = [z20, 0.9] zs = odeint(f, z0, tspan) plt.plot(zs[:,0],zs[:,1], 'g-') # path #Box and arrow arr1 = matplotlib.patches.Arrow(0, -0.5, 0, 1, width=0.4) rect1 = matplotlib.patches.Rectangle((-0.4,-1),0.8, 2, color='lightblue') #Plotting plot(x,line,color='blue') plot(up,z,color='red') a = title("Magic angle for a dipole") plt.text(0, 2, "$B=sin \Theta cos \Theta \hat{x} + (cos^{2} \Theta - 1/3)\hat{z}$",size='large') savefig('dipole.png',dpi=300) I also tried out the new streamlines plot in matplotlib. Field lines of a magnet (red) with line drawn at the magic angle (blue) seen intersecting the field lines exactly where the z component (upwards component) of the field is zero. #Setting up new range for streamlines xmax=3.0 xmin=-xmax NX=100 zmax=3.2 zmin=-zmax NZ=100 x=linspace(xmin,xmax,NX) z=linspace(zmin,zmax,NZ) X, Z = meshgrid(x, z) #Function for the vector field R = np.sqrt(X**2 + Z**2) Bx = 0.5*(3*X*Z/R**5) Bz = 0.5*(3*Z**2/R**5 - 1/R**3) fig = plt.figure() #New streamplot in matplotlib QS = streamplot(X,Z,Bx,Bz,density=[1.3,1.3],linewidth=1,color='red',minlength=0.3) arr1 = matplotlib.patches.Arrow(0, -0.5, 0, 1, width=0.4) rect1 = matplotlib.patches.Rectangle((-0.4,-1),0.8, 2, color='lightblue') plot(x,line,color='blue',) plot(up,z,color='green',linewidth=1.5) a = title("Magic angle for a dipole potential") plt.text(0.2, 2.8, "$B=sin \Theta cos \Theta \hat{x} + (cos^{2} \Theta - 1/3)\hat{z}$",size='large') savefig('dipole_streamlines.png',dpi=300) We we see is that at the magic angle we remove all of the effects of the magnetic field in the z direction on the particle. In NMR they use rotors that orientate the crystal at the magic angle relative to the magnet field. With the spinning removing any directional effects. This lead to a nice liquid-like NMR peak. http://en.wikipedia.org/wiki/Magic_angle_spinning In the Photon Factory we use a pump-probe technique to excite molecules and then see how they decay with time. We polarise the probe light (polarisation is the direction of the electric field) so that it is at the magic angle relative to the pump. This removes any polarisation dependencies from the two incoming beams with the sample (interesting measurements changing the polarisation of the pump and probe. This anisotropy can be used to infer structure of the molecule and other interesting properties). Menzel R., "Photonics: Linear and Nonlinear Interactions of Laser Light and Matter" References http://bulldog2.redlands.edu/facultyfolder/deweerd/tutorials/Tutorial-QuiverPlot.pdf Field of a small dipole http://www.physicsinsights.org/dipole_field_1.html 1. Just thought I'd mention that your "divergence of B" looks more like the gradient of B -- that is, you forgot the dot. In LaTeX it should be $\nabla \cdot B$.
{}
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnline Submission ㆍMy Manuscript - For Reviewers - For Editors A simple proof of the $p$-adic version of the Sobolev embedding theorem Commun. Korean Math. Soc. 2010 Vol. 25, No. 1, 27-36 https://doi.org/10.4134/CKMS.2010.25.1.27Printed March 1, 2010 Yong-Cheol Kim Korea University Abstract : We give a simple proof of certain mapping properties of the $p$-adic Riesz potential and Bessel potential, and the $p$-adic version of the Sobolev embedding theorem obtained in [6]. Keywords : $p$-adic vector space, the $p$-adic Riesz and Bessel potential, Sobolev embedding theorem MSC numbers : 11S80, 11K70, 11E95 Downloads: Full-text PDF Copyright © Korean Mathematical Society. The Korea Science Technology Center (Rm. 411), 22, Teheran-ro 7-gil, Gangnam-gu, Seoul 06130, Korea Tel: 82-2-565-0361  | Fax: 82-2-565-0364  | E-mail: paper@kms.or.kr   | Powered by INFOrang Co., Ltd
{}
Home > Standard Error > Calculate Variance From Standard Error # Calculate Variance From Standard Error ## Contents Let's plot this on the chart: Now we calculate each dog's difference from the Mean: To calculate the Variance, take each difference, square it, and then average the result: So the They report that, in a sample of 400 patients, the new drug lowers cholesterol by an average of 20 units (mg/dL). The true standard error of the mean, using σ = 9.27, is σ x ¯   = σ n = 9.27 16 = 2.32 {\displaystyle \sigma _{\bar {x}}\ ={\frac {\sigma }{\sqrt If your data are normally distributed, around 67% of your results should fall within your mean, plus or minus your standard deviation, and 95% of your results should fall within two Source Download a free trial here. See unbiased estimation of standard deviation for further discussion. When you have "N" data values that are: The Population: divide by N when calculating Variance (like we did) A Sample: divide by N-1 when calculating Variance All other calculations stay What is alluded to by "In general, σ2 is not known, but can be estimated from the data. ## Calculate Standard Deviation Standard Error The sample mean will very rarely be equal to the population mean. So, when drawing a finite sample from a population, the variance has to be estimated. Because the age of the runners have a larger standard deviation (9.27 years) than does the age at first marriage (4.72 years), the standard error of the mean is larger for How much should I adjust the CR of encounters to compensate for PCs having very little GP? This is not the case when there are extreme values in a distribution or when the distribution is skewed, in these situations interquartile range or semi-interquartile are preferred measures of spread. The confidence interval of 18 to 22 is a quantitative measure of the uncertainty – the possible difference between the true average effect of the drug and the estimate of 20mg/dL. Sample Variance Standard Error Notice that s x ¯   = s n {\displaystyle {\text{s}}_{\bar {x}}\ ={\frac {s}{\sqrt {n}}}} is only an estimate of the true standard error, σ x ¯   = σ n Retrieved 17 July 2014. Calculate Mean Standard Error Interquartile range is the difference between the 25th and 75th centiles. The simplest estimate would be to calculate the observed variance in the sample, and use this as the best estimate of the true variance within the population. http://www.statsdirect.com/help/basic_descriptive_statistics/standard_deviation.htm Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. It will be shown that the standard deviation of all possible sample means of size n=16 is equal to the population standard deviation, σ, divided by the square root of the Variance And Standard Error Of Ols Estimators This step assumes that the standard error is a known quantity. The distribution of the mean age in all possible samples is called the sampling distribution of the mean. Referenced on Wolfram|Alpha: Standard Error CITE THIS AS: Weisstein, Eric W. "Standard Error." From MathWorld--A Wolfram Web Resource. ## Calculate Mean Standard Error The standard deviation for each group is obtained by dividing the length of the confidence interval by 3.92, and then multiplying by the square root of the sample size: For 90% http://ncalculators.com/math-worksheets/calculate-standard-deviation-standard-error.htm How to Calculate the RMSE or Root Mean Squared Error; The... Calculate Standard Deviation Standard Error The distribution of these 20,000 sample means indicate how far the mean of a sample may be from the true population mean. Calculate Variance Standard Deviation Perspect Clin Res. 3 (3): 113–116. For moderate sample sizes (say between 60 and 100 in each group), either a t distribution or a standard normal distribution may have been used. http://galaxynote7i.com/standard-error/calculate-the-standard-error-of-m.php The standard deviation of all possible sample means of size 16 is the standard error. CRC Standard Mathematical Tables and Formulae. The graphs below show the sampling distribution of the mean for samples of size 4, 9, and 25. Calculate Standard Error From Variance Covariance Matrix Next, consider all possible samples of 16 runners from the population of 9,732 runners. Student approximation when σ value is unknown Further information: Student's t-distribution §Confidence intervals In many practical applications, the true value of σ is unknown. the negatives cancel the positives: 4 + 4 − 4 − 44 = 0 So that won't work. http://galaxynote7i.com/standard-error/calculate-standard-error-of-the-mean-from-standard-deviation.php If the sample size is large (say bigger than 100 in each group), the 95% confidence interval is 3.92 standard errors wide (3.92 = 2 × 1.96). Arguments for the golden ratio making things more aesthetically pleasing How can i know the length of each part of the arrow and what their full length? Variance And Standard Error Formula How to Calculate Standard Errors How to Calculate Variance From Standard Error. v t e Statistics Outline Index Descriptive statistics Continuous data Center Mean arithmetic geometric harmonic Median Mode Dispersion Variance Standard deviation Coefficient of variation Percentile Range Interquartile range Shape Moments ## The standard deviation of the age for the 16 runners is 10.23, which is somewhat greater than the true population standard deviation σ = 9.27 years. Are there any saltwater rivers on Earth? Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; and Vetterling, W.T. Substitute $\frac{RSS}{N-2}$ into the equation for SE$(\hat{\beta_1})^2$ and you will get the values in ISL. Variance And Standard Error Relationship Because these 16 runners are a sample from the population of 9,732 runners, 37.25 is the sample mean, and 10.23 is the sample standard deviation, s. When distributions are approximately normal, SD is a better measure of spread because it is less susceptible to sampling fluctuation than (semi-)interquartile range. The standard error of an estimate may also be defined as the square root of the estimated error variance of the quantity, (Kenney and Keeping 1951, p.187; Zwillinger 1995, p.626). Copy (only copy, not cutting) in Nano? http://galaxynote7i.com/standard-error/calculate-standard-error-of-mean-from-standard-deviation.php T-distributions are slightly different from Gaussian, and vary depending on the size of the sample. For the age at first marriage, the population mean age is 23.44, and the population standard deviation is 4.72. Review authors should look for evidence of which one, and might use a t distribution if in doubt. As will be shown, the standard error is the standard deviation of the sampling distribution. The standard error of the mean (SEM) (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means over all If the spread of your data is close to the mean, the standard deviation will be small and vice versa. The proportion or the mean is calculated using the sample. Hutchinson, Essentials of statistical methods in 41 pages ^ Gurland, J; Tripathi RC (1971). "A simple approximation for unbiased estimation of the standard deviation". Calculate the t-value. A natural way to describe the variation of these sample means around the true population mean is the standard deviation of the distribution of the sample means. Wolfram Problem Generator» Unlimited random practice problems and answers with built-in Step-by-step solutions. Unexplained variance is a term used in analysis... Note that while this definition makes no reference to a normal distribution, many uses of this quantity implicitly assume such a distribution. If this is not the case, the confidence interval may have been calculated on transformed values (see Section 7.7.3.4). Sampling from a distribution with a large standard deviation The first data set consists of the ages of 9,732 women who completed the 2012 Cherry Blossom run, a 10-mile race held How to Find Standard Deviation on a TI 84 Plus Variance and standard deviation are statistical tests that can be performed to explain the data found in a certain ...
{}
# Convert infix regular expression notation to postfix This is a small part of a larger program for implementing a limited syntax regular expression constructor using Ken Thompson's construction algorithm. Converting to postfix before the regular expression is processed makes the processing vastly simpler because everything can be smoothly read and processed left to right. The following algorithm for performing the conversion works in a shunting-yard like manner where an operator stack is used to determine when operators should be sent to the output string. ### Conversion Function: typedef struct _conv_ret { char *re; int err; } conv_ret; conv_ret conv(char *re) { /* converts limited regex infix notation with explicit * catenation denoted by '.' to postfix in a shunting-yard manner */ conv_ret ret = {NULL, REGEX_TOOLARGE}; if(strlen(re) > MAX_LINE) return ret; static char buf[MAX_LINE]; char *bufp = buf; ret.re = buf; ret.err = 0; /* operator stack */ int bp[strlen(re)]; int *sp = bp; #define OP_NUM 6 /* placeholder for id 0 */ char id_map[OP_NUM+1] = {' ', '(', '|', '.', '?', '+', '*'}; int prec_map[OP_NUM+1] = {0, 1, 2, 3, 4, 4, 4}; #define push(id) *++sp = id #define pop() *bufp = id_map[*sp--]; bufp++ for(; *re; re++) { /* loop skips open paren (id 1) because it is only there * as a placeholder until the closing paren is pushed */ for(int id = 2; id < OP_NUM+1; id++) { /* pop until incoming op is * highest precedence on stack */ if(id_map[id] == *re) { if(sp > bp) { while(prec_map[id] <= prec_map[*sp]) { pop(); } } push(id); goto RELOOP; } } switch(*re) { case '(': push(1); goto RELOOP; case ')': while(*sp != 1) { /* couldn't find matching paren. send error */ if(sp == bp) { ret.re = NULL; ret.err = PAREN_MISMATCH; return ret; } pop(); } /* pop without sending paren to buf */ --sp; goto RELOOP; default: /* send non op to buf */ *bufp = *re; bufp++; } RELOOP: ; } /* pop all leftover values in stack to buf */ while(sp > bp) { /* error if unmatched open paren */ if(*sp == 1) { ret.re = NULL; ret.err = PAREN_MISMATCH; return ret; } pop(); } /* null terminate */ *bufp = 0; return ret; } #include <string.h> #define MAX_LINE 10000 /* error codes */ #define REGEX_TOOLARGE 1 #define PAREN_MISMATCH 2 Note: Further errors are caught in later stages of parsing within the program, but this post is just about the postfix conversion and the conversion itself is not meant to do a whole lot of syntactic and semantic parsing. ### Examples: a+a -> aa+ a+a* -> aa+* a.(a+b)*.b -> aab+*.b. a.(a+b)*.b() -> aab+*.b. a.(a+b)*.b) -> PAREN_MISMATCH a.(a+b)*.b( -> PAREN_MISMATCH Any criticisms aimed towards improving the efficiency and readability of this code would be greatly appreciated. • It would be helpful for reviewers to have the missing #define information as well as any header files used for the code. It would also be helpful if you added any test cases (programs and data) you wrote for this code. Sep 4 '20 at 12:55 – user228607 Sep 4 '20 at 13:28 • Please do not modify the question after an answer has been posted, especially please do not modify the code, everyone has to see the code as reviewed help center. Sep 4 '20 at 15:45 • Please see the rules I pointed at. Sep 4 '20 at 16:09 • You can post a follow up question with your changes and link back to this one. Sep 5 '20 at 3:14 ## General Observations It is difficult to accurately define any bottle necks when only one function is presented. The brief moment when main() and match() were visible was very helpful, although it would have been nice if the body of match() was included as well. It might be better to use a power of 2 (1024, 2048, ...) for MAX_LINE rather than a round number like 10000. The code is overly complex and should be broken into multiple functions, this is actually proved by the multiple goto RELOOP; statements. These goto statements can be replaced by break; and continue and in one case by the return of a function. Try to avoid writing Spaghetti code. ## Implement Stacks using Structs It is much easier to maintain code when the stack pointer and the stack container (array) can be found in one place. Rather than write push and pop as macros, implement them as functions that take a stack struct, and in the case of push the parameter of what is being pushed on the stack. ## Magic Numbers While there are symbolic constants used rather than number constants in some parts of the code, this could be improved, it is also possible to use enums rather than #define to define symbolic constants in C and I would recommend using enums to represent the error ids because it is expandable. typedef enum Error_Code { REGEX_TOOLARGE = 1, PAREN_MISMATCH = 2 } Error_Code; Just a quick though here, if the error codes start at 0 rather than 1 than any error messages could be stored as an array of strings. The place where there are still magic numbers is in this code: int prec_map[OP_NUM] = { 1, 2, 3, 4, 4, 4 }; It isn't clear what any of those numbers mean. It isn't clear that OP_NUM is necessary because the count can be determined by either one of the following: char id_map[] = { '(', '|', '.', '?', '+', '*' }; const size_t OP_NUM = sizeof(id_map)/sizeof(*id_map); or int prec_map[] = { 1, 2, 3, 4, 4, 4 }; const size_t OP_NUM = sizeof(prec_map)/sizeof(*prec_map); Numeric constants in code are sometimes referred to as Magic Numbers, because there is no obvious meaning for them. ## Posible Optimization Use strlen() only once and store the value in a variable. Avoid function-like macros. They are sooo seventyish, and they may seriously reduce the readability of the code. In this particular case it took me a while to realize that while(sp > bp) { /* error if unmatched open paren */ if(*sp == 1) { ret.re = NULL; ret.err = PAREN_MISMATCH; return ret; } pop(); } is not an infinite loop. Looking at just this snippet, it is not possible to see that sp does change. The fact that it is decremented is hidden in pop(), and very hidden it is. Use an inline function, and trust the compiler to produce an identical code. The compilers are very good in optimization these days. The inner loop over ids does not look pretty. The nesting is too deep. Factor out important functions. First, the real job is done only when id_map[id] == *re. It means int id = find_id(*re); if (id != INVALID_ID) { do_the_job; } gotos are not called for. Those inside switch are absolutely unnecessary; a normal break would do the same thing. The goto inside the inner loop is more tricky to eliminate. Notice that it naturally belongs to the default case of the switch: it does nothing for ( and ). Also notice that the *bufp = *re; bufp++; sequence is only executed if push(id) never happened. With the previous comment in mind, consider default: id = find_id(*re); if (id == INVALID_ID) { *bufp++ = *re; } else { do_the_job; } See how the gotos disappear. And yet again, don't be shy of functions. • Yeah. I completely refactored the code such that everything is within a switch statement, the id system is totally different, and b/c unary operators are unnecessarily processed in the stack, the only operator ever to go in the stack is the binary union '|'. I wish I could change my code as in the original post because when someone looks up 'regex infix to postfix', this comes up, and I don't want people to see such sloppy and bad/ not even fully functional for some cases code, but it is what it is. – user228607 Sep 5 '20 at 0:12
{}
Faculty of Physics M.V.Lomonosov Moscow State University Physics of nuclei and elementary particles ## Nucleon pairing in atomic nuclei ### B.S. Ishkhanov$^{1,2}$, M.E. Stepanov$^{1,2}$, T.Yu. Tretyakova$^2$ Moscow University Physics Bulletin 2014. 69. N 1. P. 1 The nucleon pairing effect that is analyzed in the present paper is one of the striking manifestations of nuclear dynamics. Nucleon pairing for different chains of nuclei dependent upon the number of protons or neutrons in the nucleus allows one to explain the emergence of a great number of positive-parity states, which form a ground state multiplet, in even-even nuclei in the excitation energy range E* < 4 MeV. The interaction of paired nucleons with vibrational and rotational degrees of freedom of a nucleus produces a wide variety of excitation spectra of positive-parity states in even-even nuclei. Show Abstract Theoretical and mathematical physics ## Formation of the distribution function of the aerosol-particle radii for the hydrolysis products of uranium hexafluoride in industrial premises ### S.P. Babenko$^1$, A.V. Badin$^2$ Moscow University Physics Bulletin 2014. 69. N 1. P. 21 We consider $\mathrm{UO_{2}F_{2}}$, and $\mathrm{HF}$ aerosol particles that formed in the air of industrial premises at a factory of the nuclear industry. The distribution function $g_{1}$ of the aerosol-particle radii at a given space-time point is analyzed. Some of the lognormal distribution functions that are related to a gas-dispersed environment of the working premise are considered. The deviation of $g_{1}$ from lognormal distribution functions is estimated. The related problems of calculating the average transmission coefficients of atoms of toxic substances (uranium or fluorine) in the human body during inhalation are discussed. Show Abstract ## Monte Carlo modeling of metallic hydrogen: the phase transition and the equation of state ### A.A. Novoselov$^1$, O.V. Pavlovsky$^2$, M. V. Ulybyshev$^2$ Moscow University Physics Bulletin 2014. 69. N 1. P. 29 We conducted numerical modeling of atomic (metallic) hydrogen using the PIMC (path integral Monte Carlo) method. The temperature and density range in which the electron (proton) behavior is governed by quantum (classical) statistics was studied. The equations of state in the form of dependences of the internal energy and pressure on temperature and density were obtained in that region. These dependences allow one to reveal and study the phase transition between crystal and liquid phases. Show Abstract Physics of nuclei and elementary particles ## Photodisintegration of molybdenum isotopes ### B.S. Ishkhanov$^{1,2}$, I.M. Kapitonov$^1$, A.A. Kuznetsov$^2$, V.N. Orlin$^2$, H.D. Yoon$^1$ Moscow University Physics Bulletin 2014. 69. N 1. P. 37 The process of photodisintegration of molybdenum isotopes was studied using the induced activity method. The yields of isotopes produced as a result of photonuclear reactions on a natural mixture of molybdenum isotopes were determined at an electron accelerator energy of 67.7 MeV. A comparison of the experimental results with the theoretical calculation carried out using the combined model of photonucleon reactions shows that the model gives a fair description of the experimental yields of photonucleon reactions on all molybdenum isotopes except for $^{92}$Mo. The high yields of the proton channels of photonuclear reactions on the $^{92}$Mo isotope and low yields of the corresponding neutron channels are interpreted based on the shell structure of molybdenum isotopes. Show Abstract ## Determination of the numerical value of the gravitational constant in the case of a complicated form of interacting bodies ### V.M. Shakhparonov Moscow University Physics Bulletin 2014. 69. N 1. P. 47 An example of extending the functionality of methods for calculating the gravitational constant for the spherical shape of interacting bodies is presented. The results that were obtained using an apparatus in which the working body is in the form of quartz box are analyzed. A bad choice of the form and material of the working medium for a torsion balance in a vacuum chamber with non-equilibrium flows led to a systematic measurement error. Show Abstract ## Turbulence-induced laser-beam distortions in phase space ### T.I. Arsenyan, N.A. Suhareva, A.P. Sukhorukov Moscow University Physics Bulletin 2014. 69. N 1. P. 55 A consecutive analysis of spatial-temporal disturbances of a laser beam propagating through a turbulent media was carried out. Evolutionary equations for the intensity distributions were obtained for a channel with different types of regular and stochastic spatial dispersion. The relative simplicity and physical validity of the integral relationships allows one to build them in the control algorithm for jam-protection of open-space operating channels. Show Abstract ## Charging potential of dielectrics and insulated conductors as a function of the angle of incidence of an electron beam ### E.N. Evstaf’eva$^1$, S.V. Zaitsev$^1$, E.I. Rau$^{1,2}$, A.A. Tatarintsev $^2$ Moscow University Physics Bulletin 2014. 69. N 1. P. 61 The cardinal characteristics of the charging of dielectric and ungrounded metal targets within radiation by medium-energy electrons (0.5–10 keV) have been studied theoretically and experimentally as a function of the angle of incidence of the electron beam. The coefficients of electron emissions and the second critical energy of primary (radiating) electrons, E$_{2C}$, have been determined as a function of the angle of incidence α when the targets are not being charged. Show Abstract Optics and spectroscopy. Laser physics ## Investigation of the temperature dependence for the spectra of supercooled water in the middle infrared ### A.V. Khakhalin$^{1,2}$, A.V. Koroleva$^1$ Moscow University Physics Bulletin 2014. 69. N 1. P. 66 The temperature dependence of the bending ν$_2$, combination ν$_2$ +ν$_L$, and stretching (ν$_1$, ν$_3$, 2ν$_2$) absorption bands in the infrared spectra of supercooled water with a temperature-change step Δt from 2 to 2.5$^0$C was studied using an advanced infrared Fourier spectrometer. It was found that the frequency of the maximum of the stretching absorption band (2700–3700 cm$^{-1}$) decreases with the reduction of the water temperature from -0.5 to -5.0$^0$C. The frequency of the maximum of the combination absorption band (2130 cm$^{-1}$) increases with the reduction of the water temperature in a range from -3.0 to -5.0$^0$C. The frequency of the maximum of the absorption band of bending oscillation (1640 cm$^{-1}$) is invariable with a reduction of the water temperature from -0.5 to -5.0$^0$C. Show Abstract Condensed matter physics ## The nonlinear optical properties of nanotubes with spiral defects in a longitudinal magnetic field ### V.Ch. Zhukovskii$^1$, V.D. Krevchik$^2$, M.B. Semenov$^2$, A.V. Razumov$^2$ Moscow University Physics Bulletin 2014. 69. N 1. P. 72 It is demonstrated that the anisotropic transfer of photon momentum to an electronic subsystem results in induction of a photon-drag EMF in a standing electromagnetic wave along the axis of a nanotube with a spiral defect, which confirms the assumption found in the literature that the occurrence of such an effect in the presence of an external magnetic field is possible not only in 2-D systems but also in nanotubes with a spiral symmetry. One of the potential mechanisms of inducing the EMF connected with the spatial asymmetry of the electron-phonon interaction in a nanotube with a spiral defect is considered. This mechanism allows for such an EMF to occur upon heating the electron system by the Joule heat of the photon-drag current that flows through the nanotube. Show Abstract ## Investigation of the structural state of potassium polytitanate replaced by iron ### M.S. Krivenkov, A.I. Komiak, A.A. Novakova Moscow University Physics Bulletin 2014. 69. N 1. P. 82 Investigations were carried out for iron ion-substituted potassium polytitanates (PPTs). The mechanism of iron incorporation into the interlayer space of the PPTs, the chemical purity of the polytitanate powders, and the particle morphology were studied. The replacement of potassium ions in the polytitanate structure was experimentally observed and the degree of this substitution was numerically evaluated at different stages of preparation. It was shown that the iron incorporation into the interlayer space of polytitanate was accompanied by the formation of lepidocrocite, which follows from data of Moessbauer spectroscopy. It was found that in the iron ions incorporation process used in the experiment polytitanate particles agglomerate, which may be associated with the formation of lepidocrocite. Show Abstract Chemical physics, physical kinetics, and plasma physics ## Radial inhomogenity of plasma parameters in a low-pressure inductive RF discharge ### E.A. Kralkina, P.A. Nekliudova, V.B. Pavlov, K.V. Vavilin, V.P. Tarakanov Moscow University Physics Bulletin 2014. 69. N 1. P. 86 This work is devoted to systematic investigation into the radial dependence of the plasma parameters of a low-pressure inductive radio-frequency (RF) discharge on pressure within a wide range of 0.8–1 Torr. Experimental results that were obtained under the considered pressures make it possible to analyze the patterns of the changes in plasma parameters upon both a nonlocal mode of discharge and a transition from a nonlocal to local mode of the RF power input. Discharges in helium, neon, argon, and krypton were considered. Experimental data were compared to the results of the numerical simulation of the inductive RF discharge using the particle-in-cell (PIC) method. Show Abstract ## The impact of the Ramsauer effect on the frequency of elastic collisions in inductive RF discharges in inert gases ### E.A. Kralkina, P.A. Nekliudova, V.B. Pavlov, K.V. Vavilin Moscow University Physics Bulletin 2014. 69. N 1. P. 92 This paper presents the results of investigating the power absorption mechanism of an inductive RF discharge plasma. Dependences of the frequency of elastic electron collisions with inert gas atoms (helium, neon, argon, and krypton) on the pressure are given. In the frequency range of •10$^6$–3•10$^7$ s$^{-1}$, an equivalent plasma resistance and the power input into the plasma are determined by the values of collision frequency and electron density within a skin layer and do not depend on the type of gas within the limits of experimental error. Upon reaching the electron temperature of -1 eV, the energy of the main part of electrons lies in the range of Ramsauer’s minimum for elastic cross section. This leads to a decreasing elastic-collision frequency in heavy inert gases as compared to helium. Show Abstract ## Development of nanosecond combined volume discharge with plasma electrodes in an air flow ### N.O. Arkhipov, I.A. Znamenskaya, I.V. Mursenkova, I.Yu. Ostapenko, N.N. Sysoev Moscow University Physics Bulletin 2014. 69. N 1. P. 96 The space-time characteristics of a nanosecond combined volume discharge with preionization from a plasma sheet with a nanosecond duration in air (∼200 ns) are investigated. The integral discharge radiation, radiation spectrum, and discharge current under conditions within the discharge volume, including gas-dynamic flow with a planar shock wave, are analyzed. It is shown that the volume discharge glow is homogeneous in the master phase. The glow in the area of the shock-wave front increases and its duration may be more than 2 μs. Show Abstract
{}
# Core Shell Sphere ## Description: .. _core_shell_sphere: This model provides the form factor, $P(q)$, for a spherical particle with a core-shell structure. The form factor is normalized by the particle volume. For information about polarised and magnetic scattering, see the `magnetism` documentation. Definition The 1D scattering intensity is calculated in the following way (Guinier, 1955) $$P(q) = \frac{\text{scale}}{V} F^2(q) + \text{background}$$ where $$F(q) = \frac{3}{V_s}\left[ V_c(\rho_c-\rho_s)\frac{\sin(qr_c)-qr_c\cos(qr_c)}{(qr_c)^3} + V_s(\rho_s-\rho_\text{solv})\frac{\sin(qr_s)-qr_s\cos(qr_s)}{(qr_s)^3} \right]$$ where $V_s$ is the volume of the whole particle, $V_c$ is the volume of the core, $r_s$ = $radius$ + $thickness$ is the radius of the particle, $r_c$ is the radius of the core, $\rho_c$ is the scattering length density of the core, $\rho_s$ is the scattering length density of the shell, $\rho_\text{solv}$, is the scattering length density of the solvent. The 2D scattering intensity is the same as $P(q)$ above, regardless of the orientation of the $q$ vector. NB: The outer most radius (ie, = radius + thickness) is used as the effective radius for $S(Q)$ when $P(Q) \cdot S(Q)$ is applied. Validation Validation of our code was done by comparing the output of the 1D model to the output of the software provided by NIST (Kline, 2006). Figure 1 shows a comparison of the output of our model and the output of the NIST software. References A Guinier and G Fournet, *Small-Angle Scattering of X-Rays*, John Wiley and Sons, New York, (1955) Authorship and Verification **Author:**
{}
• Advertisement • ### Popular Now • 9 • 9 • 11 • 12 • 9 • Advertisement • Advertisement • Advertisement # Ancient language - World map This topic is 765 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic. If you intended to correct an error in the post then please contact us. ## Recommended Posts My first post so hello to all It's not that I can't do it but I know I won't do it efficiently and it really matters this time. I tend to complicate too much in programming areas I'm not very experienced with so I come to ask your help guys. But before I start to present you my question let me explain please a very important thing. I'm creating a program in an ancient BASIC like language called AMOS for AMIGA computers. It's important because I aim to 2 MB of RAM configuration. Yes, two MEGABYTES, not gigabytes . This changes a lot as you surly agree. I have a world map 2400 x 2400 pixels. It's simply too large to load it complete to the memory (apart from this map I have other graphics, a lot of code and also need a room for music and sound samples.)  I decided to divide and split it into smaller sections: I created files called by the following order: world_ne_1a.iff world_ne_1b.iff world_ne_1c.iff ... world_ne_5a.iff world_ne_5b.iff world_ne_5c.iff world_se_1a.iff ... world_se_5c.iff world_nw_1a.iff ... world_nw_5c.iff world_sw_1a.iff ... world_sw_5c.iff Those files are about 40kB each and have resolution 400 x 240 pixels. This is how I'd do it (it might not be the best way but this what came to my mind): Input: A spot on the map using real N/W/E/S coordinates. 1. Convert to pixel coordinates (geo to pixel - already fully functional in my program). 2. Check to which of the four sections of the world it corresponds. For instance, if it's NE section of the globe/map then the program determines on which box 400 x 240 pixels it appears on. 3. Load that file (e.g. world_ne_2b.iff) 4. Copy whole image and store it as blocks 50 x 30 pixels (400/50=8, 240/30=8 - I need those to be divisible by 8 pixes and these values fit perfectly my need) 5. Having X,Y converted data pixel from point 1. center corresponding block (50 x 30 pixel) on the screen (200 x 176 (on my sketch it's 200 x 180 but it should be 200 x 176)) and fill the rest of the visible screen around it with other appropiate blocks of the map. 6. New data input: GoSub 1., GoSub 2., GoTo 7. 7. If new coordinates do not correspond to the actual section displayed on the screen and other map blocks stored in RAM erase those blocks from memory and GoTo point 2. That's the main idea. But there can be a more RAM memory hungry scenario where a spot appears on the map close to the "corner" of a 400 x 240 box. Then the program needs to load two or even four pictures depending on the X,Y axis. But even four images (~160kB) loaded into RAM is not a problem. Now goes my question: How to program it using BASIC like code? How would you do it guys? In a pseudo code or whatever language you prefer (except references to API or DLLs) Thank you in advance and if you need me to clarify some of my description please let me know. Edited by T3mp0 #### Share this post ##### Share on other sites Advertisement So firstly why are you not using a Tilemap? Secondly as you stated AMOS is Ancient.  I did play around with it in like 91 - 92 but, nowadays Ive forgotten it, I couldn't even tell you how to write a Hello World using AMOS.  I don't think you'll get much help on Gamdev but, there is a "Commodore Amiga" group on Facebook.  A bunch of guys on there still develop games in AMOS so if you joined there and asked for help you'd get a better response. #### Share this post ##### Share on other sites I know nothing about AMOS. But can't you just load part of a file in AMOS? If so, then no need to divide your world map into several small ones. Just open the file and read in what's needed. That way you'll avoid having to load 4 minimaps when close to edges. I know this doesn't answer your question but I hope it helps you some. #### Share this post ##### Share on other sites you're talking about implementing terrain chunks. when the world map is too big to fit in ram all at once, you divide it into chunks (squares), and page them off of disk into a cache with a LRU discard algo. and yes, when you approach an edge you need two chunks to draw, and four chunks for corners. you may need even more chunks, depending on chunk size and visual range. chunks in caveman (a fpsrpg) are about the size of visual range, so typically at most four are visible in the viewing frustum. the cache size is 60 or 90 chunks shared between 1 to 50 PCs. i'd like a cache size big enough to store all chunks around the max possible number of PCs (say 9 chunks times 50 players), but it would take up too much ram. you can do a google search and ask questions here about how to design a chunk system. but you'll probably need to goto the amiga sites for help with AMOS basic syntax. Edited by Norman Barrows #### Share this post ##### Share on other sites Thanks Norman, I'll look into that, it's interesting but maybe I'll point you and others to what I think could work - with your help guys of course. @Hexmind I can't load whole map because 2MB of RAM is not enough for it. But I'm starting to think that I could simplify it a bit more that I though. See below. @xbattlestation I know AMOS very well, it's not an issue. I did ask how would you do in BASIC like code but It really doesn't matter much what language it is. Pseudo code can be adapted to practically any language. I know C++ (I created a game in OpenGL) so even if it was in it's pure code (no API nor other stuff like dlls, etc.) that would do too. Yes, AMOS is based on STOS, you're correct. But there is no need to use any plugin because AMOS has sufficient graphics commands built in. I already have a routine that stores blocks and draws them onto the screen but it's just displaying, it doesn't interact with the rest of my code. I keep those blocks as tilemap, as Buster2000 also suggested. But now I need to access them according to the pixel X,Y position. I thought I could store this tilemap in an array, but how to define it and how to access it by given input coordinates? 2400 x 2400 pixels If it was just a simple array, let's call it world[5,9]... --NW-----NE-- world[0,1,2 | 3,4,5] world[1,n,n | n,n,n] world[2,n,n | n,n,n] world[3,n,n | n,n,n] world[4,n,n | n,n,n] --SW--|--SE-- world[5,n,n | n,n,n] world[6,n,n | n,n,n] world[7,n,n | n,n,n] world[8,n,n | n,n,n] world[9,n,n | n,n,n] ...I'd do the following thing (I'm open to any suggestion/correction): The axis "cuts" the world map and divides it horizontally and vertically. First the program needs to determine which of four sections the given coordinates apply to. That's an easy part: N0°-N90° and E0°-E180° corresponds to upper right (NE), N0°-N90° and W0°-W180° is upper left (NW), S0°-S90° and E0°-E180° is lower right section (SE), S0°-S90° and W0°-W180° is lower left (SW). So I'd do a simple check: If coordinate_X <=2 AND coordinate_Y <=4 Then section of the map is NW If coordinate_X >=3 AND coordinate_Y <=4 Then section of the map is NE If coordinate_X <=2 AND coordinate_Y >=5 Then section of the map is SW If coordinate_X >=3 AND coordinate_Y >=5 Then section of the map is SE Let's assume for this example we want to display Sri Lanka island which lays in the NE section 1b zone: Knowing the section and pixel position X,Y from geo2pixel routine the program needs to determine now the image file which is 400 x 240 pixels. That doesn't seem to be complicated either. The center of Sri Lanka is more or less at X=538 adn Y=1152 (remember that each section of the world is 1200 x 1200 pixels). I'd do this checking: If coordinate_X <=400 AND coordinate_Y <=240 Then load image world_ne_5a.iff If coordinate_X >400 AND coordinate_X <=800 AND coordinate_Y <=240 Then load image world_ne_5b.iff If coordinate_X >800 AND coordinate_Y <=240 Then load image world_ne_5c.iff If coordinate_X <=400 AND coordinate_Y >240 AND coordinate_Y <=480 Then load image world_ne_4a.iff If coordinate_X >400 AND coordinate_X <=800 AND coordinate_Y >240 AND coordinate_Y <=480 Then load image world_ne_4b.iff ...and so on until X=538 and Y=1152 fulfill the condition: If coordinate_X >400 AND coordinate_X <=800 AND coordinate_Y >960 Then load image world_ne_1b.iff At this moment I have the section and the image file. Should I create another array? If so it could be map[7,7] (400/8 x 240/8 = 64 blocks 50 x 30). Now some calculations: local_X = X (538) - 400 (1a is out of range so do not count it) = 138 local_Y = Y (1152) - 960 (5b to 2b are out of range, do not count their Y) = 192 New local coordinates are X=138 and Y=192. So far so good. Now let's find the tile: tile_X = Int ((138 / 50) * 0.5) tile_Y = Int ((192 / 30) * 0.5) (tile_X = 3, tile_Y = 7) Now I'd have to center it on the display area 200 x 176 (although the screen size is 320 x 256 I use limited area due to the graphic panel and other data). At a time there can be only 4 or 5 visible horizontal tiles and 6 or 7 vertical (depending on the point at the centered tile): But maybe I don't have to create any tiles but simply center the whole 400x240 image? And in case if some of the map pases onto another section (like in this case of Sri Lanka where I'd need to attach another image from SE section bcause it is too low on the screen) simply load it from another picture(s) and stick them to it in order to fill the area? Sounds like a solution somehow. It wouldn't be pro but I'm starting to think that I'd be OK with it... Any (constructive) thoughts regarding? #### Share this post ##### Share on other sites Just jumping in right here, but why don't you organize your filenames in pixel-base coordinates? In X direction you use chunks of 400 pixels, and in y in chunks of 240 pixels world_x0_y0.iff world_x400_y0.iff .. world_x2000_y2160.iff No idea about your language, but something like xbase = 400 * int(xpos / 400); ybase = 240 * int(ypos / 240); filename = "world_x" + str(xbase) + "_y" + str(ybase) + ".iff" (xpos, ypos) is the pixel coordinate, "int(xpos / 400)" does integer division (rounding down), "str(xbase)" converts number to string. #### Share this post ##### Share on other sites That's why I posted here - to hear suggestions . Thanks for your tip Alberth, seems to be a good idea #### Share this post ##### Share on other sites if the world is 2400x2400, and your images are 400x240, then your obvious choice for chunk size should be 400x240. so your world is 6 chunks wide, by 10 chunks tall. you'll need to be able to convert between world coordinates and chunk number plus coordinates in that chunk. world2chunk: cx = wx / 400 cy = wy / 240 x = wx-cx*6 y = wy-cy*10 chunk2world: wx = cx*400+x wy = cy*240+y where wx,wy are the world coordinates, cx,cy are the chunk number and x,y are the coordinates in the chunk. to draw, you take the camera's world coordinates, and determine the chunk number (cx,cy) and location in the chunk (x,y). the chunk number (cx,cy) tells you which chunks must be drawn: chunk cx,cy, and enough chunks around it to fill the screen. how many chunks radius around it you must draw depends on the current zoom. do a trivial check for entirely offscreen, and draw the rest. you might just use a brute force algo that simply draws all chunks with a trivial offscreen rejection check first. you only have 60 chunks, the rejection test would only be like four greater than or less than checks. so 60 * 4 = 240 greater than or less than checks total - that's nothing on today's PCs.   from the chunk number and location in the chunk, you can determine the x and y scroll offset used for the entire map to center the location on the screen. Edited by Norman Barrows #### Share this post ##### Share on other sites Do you need rendering speed? If you don't then all is good. If you do then the approaches above are not going to work. The approach I used on the Amiga was to scan the map and break it down to unique 8 by 8 tiles. The first thing you notice is that the Earth has a lot of water. A hell of a lot of water, I mean I wouldn't want to try and drink it even if it was beer. So all the areas of the map that are solid water can be broken down into a single 8 by 8 tile. The same basic idea is used to extract all the unique tiles you need to to display the whole map. When I did it I was a bit brutal in the comparison tool and got it down to 187 tiles. So 187 * 8 * 8 = 11968 pixels. A hell of a lot less than 2400 by 2400 Then you need a map. 2400/8 * 2400/8 is 90,000 so you could just use that, but I needed ram for game code as well. So I then ran the code again on the generated map to create macro areas which contain 8 by 8 tiles. Each macro tile is then 64 by 64 pixels, which was perfect for me a large play area in a small amount of bytes. I can't remember how many tiles that generated, it wasn't a huge improvement like the first run, but I think it saved me a couple of K. Anyway, this is only if you need speed. The technique you are currently using will be fine otherwise. #### Share this post ##### Share on other sites • Advertisement
{}
# There are 9 students in a club. Three students are to be chosen to be on the entertainment committee. In how many ways can this group be chosen? Apr 3, 2018 In $84$ ways this group can be chosen . #### Explanation: The number of selections of "r" objects from the given "n" objects is denoted by $n {C}_{r}$ , and is given by nC_r=( n!)/(r!(n-r)!) n=9 , r=3 :. 9C_3=( 9!)/(3!(9-3)!)=(9*8*7)/(3*2)= 84 In $84$ ways this group can be chosen . [Ans]
{}
## amorphous silicon vs crystalline silicon Solar cells are classified by their material: crystal silicon, amorphous silicon, or The current knowledge of the electrochemistry of bulk crystalline silicon powder is summarized in the voltage curve shown in Fig. And no qualitative, Precipitated Silica Powder/Granular for Tires ISO China Supplier, Silicon Dioxide For Painting Coating And Inks, Buy Silica Powder For Adhesives And Sealants, Pharmaceutical/Medicine Grade PPT Silica for Industrial and Agricultural Chemicals/Silica Pesticides Carriers/Silica Medicine Carriers, Hydrochloric acid deposition method of produing silica powder. The predicted value iso calc is based on a fit to the Uptake of amorphous silicon dioxide, in high doses, leads to non-permanent short-term inflammation, where all effects heal. So how do they compare in field trials? Photovoltaic Solar Energy Conference, https://doi.org/10.1007/978-94-009-3817-5_92. Polycrystalline solar panel manufacturers melt multiple silicon fragments together to produce the wafers for these panels. deposits are all possible candidates for crystalline silica exposures. Ross, Jr - JPL, April 30, 1986. 1 and the differential capacity curves shown in Fig. For this reason, they are called “poly” or multi crystalline. This applies particularly to well-documented pneumoconiosis among diatomaceous earth workers. The electrons in each cell will have less space to Crystalline panels do not perform as well in partial shading (compared to Amorphous cells) and they do gradually lose a small percentage of output as the temperature rises above 25°C. Crystalline Solids: Diamond, quartz, silicon, NaCl, ZnS, all metallic elements such as Cu, Zn, Fe etc. © ECSC, EEC, EAEC, Brussels and Luxembourg 1987, Seventh E.C. pp 521-527 | And no qualitative silicon does not exist in this extended lattice structure, the lattice network … Interparticle Forces. In our Solar Secrets book, we explain the advantages of Amorphous panels (thin film low light) compared to the crystalline panels. I compare here the crystalline silicon in red with the amorphous silicon in blue on the left are shown the. Not logged in Light-induced changes in a-Si:H at high illumination; Inverse S-W effect’ Volltrauer, Gau, Kampas, Kiss, Michalski Proceedings of 18th IEE PV Specialists Conference, October 1985. Technological progress and commercial trends are identified and projected, although the emphasis is on products and prices available today for existing applications. To understand this a bit clearer, think of it as spraying the silicon onto the glass in very thin layers. Amorphous silicon enables the fabrication of very high-efficiency crystalline-silicon-based solar cells due to its combination of excellent passivation of the crystalline silicon surface and permeability to electrical charges. The seed is then drawn up and the molten silicon forms around it, creating one crystal. Crystalline silicon Last updated February 15, 2020 Crystalline-silicon solar cells are made of either poly-Si (left) or mono-Si (right)Crystalline silicon (c-Si) is the crystalline forms of silicon, either polycrystalline silicon (poly-Si, consisting of small crystals), or monocrystalline silicon (mono-Si, a continuous crystal). Abundant and efficient, crystalline-silicon solar cells have been around since the 1950s, but thin-film solar cells are the new kids set to become the medium of choice. Amorphous Solids: Amorphous solids have covalently bonded networks. Crystalline silicon (c-Si) is the crystalline forms of silicon, either multicrystalline silicon (multi-Si) consisting of small crystals, or monocrystalline silicon (mono-Si), a continuous crystal. Moreover, this compound has many crystalline forms; we call them polymorphs. Crystalline silicon nitride powder in the form of granular particles having a large tap density and having good filling characteristics is produced by heating amorphous silicon nitride powder under an inert or reducing gas atmosphere. Diffusion of lithium atoms in bulk crystalline silicon (c-Si) and amorphous silicon (a-Si). Silicon is a chemical element with symbol Si and atomic number 14. Quartz crystal is crystalline silica, silicon dioxide crystals and oxygen atoms in the atomic ratio of 1: 2 atomic composition of the three-dimensional network of crystals, with different crystal and color. The reason for this is that the Solar manufacturing industry keeps setting itself up for failure and every manufacturer of the amorphous panels that we wanted to get goes bankrupt. Transitions between amorphous forms of solids and liquids are difficult to study. 78.40.228.128. This structure can be stretched very large, thus forming a stable lattice structure. Amorphous silicon, also known as amorphous silicon, is a kind of silicon allotropes. are examples of crystalline solids. This is the reason behind the higher efficiency of monocrystalline vs. polycrystalline solar panels. Silicon can be easily prepared in amorphous (a-Si), crystalline (c-Si) and monocrystalline state, the mechanical properties of which can be followed up and compared each other. Like conventional solar panels, amorphous solar panels are primarily made up of silicon. Amorphous silicon seems a promising alternative because only a thin layer of it would be required to produce power, about 1 percent to 2 percent of the amount for crystalline silicon solar panels. Amorphous refers to the structure of some non-complete crystalline amorphous areas (amorphous regions) or some amorphous solid (amorphous) composition. Crystalline silicon is usually tetrahedron, and each silicon atom is at the apex of the tetrahedron and is covalently bonded to the other four silicon atoms. 2 . Another advantage of utilizing amorphous silicon thin film over crystalline silicon is that the former absorbs up to 40 times more solar radiation. Amorphous silicon (a-Si) Cadmium telluride (CdTe) Copper indium gallium selenide (CIGS) Dye-sensitized solar cells (DSC) Each of these materials creates a different ‘type’ of solar panel, however, they all fall under the umbrella of The major problem in the assessment of health effects of amorphous silica is its contamination with crystalline silica. There, four oxygen atoms surround a silicon atom. Since amorphous silicon solar cells are sensitive to light with essentially the same wavelengths, they can also be used as visible light sensors. Silicon It is a hard and brittle crystalline solid with a blue-grey metallic lustre; and it is a tetravalent metalloid and semiconductor. The distinct properties of amorphous thermoplastics distinguish them from thermoset elastomers such as Liquid Silicone Rubber (LSR), which are generally softer and more flexible. Several authors have reported that crystalline silicon becomes amorphous upon alloying with lithium. Amorphous silicon lacks the ordering of silicon atoms in the form of a crystalline lattice. To get the benefits of both materials, it’s common for Liquid Silicone Rubber (LSR) to be overmolded onto an amorphous thermoplastic base, or vice-versa. When I announced that we’ll be offering 25w crystalline panels, we received a few emails asking why we’re going to offer crystalline when we say how much better amorphous panels are. Amorphous silicon (a-Si) has been under intense investigation for over a decade for use in low cost However, as soon as one goes to more distant neighbors, due to slight stretching and twisting of bonds, long-range order is lost. It is a hard and brittle crystalline solid with a blue-grey metallic lustre; and it is a tetravalent metalloid and semiconductor. Amorphous silicon, also known as amorphous silicon, is a kind of silicon allotropes. The variation of the energy (red solid line) along the diffusion pathway is plotted against the diffusion length, r, given in units of the 0 0 There is a competitive price advantage of Thin Film modules over Crystalline Silicon PV modules. However, overconcentration the single aspect of its low production cost coupled with insufficient discussion of its other properties have resulted in a widespread incomplete understanding of the material. Spectra were collected on a DXR Raman microscope using a 532 nm excitation laser. Stabisky PV International, Oct 1985. Hydrogenated amorphous silicon deposited on a p-n junction in crystalline silicon causes a two-order-of-magnitude reduction in leakage current compared to the performance of a state-of-the-art thermal oxide passivant. Only careful consideration of both amorphous and crystalline can result in the selection of the better material for the application. Photovoltaic Solar Energy Conference The Polycrystalline, also referred to as multi-crystalline, is a newer technology, and its manufacturing method varies. from crystalline silicon. This long range order is not present in the amorphous forms where there is only short range order between neighbours. Read more about the advantages of amorphous silicon panels. © 2020 Springer Nature Switzerland AG. (Just how durable is amorphous silicon thin film solar material? Intentionally manufactured synthetic amorphous silicas … Previous: Hydrochloric acid deposition method of produing silica powder. The presence of silica is crystalline and amorphous. Both the fine structures of the $$\\hbox {L}_{23}$$ L 23 -edges and their threshold energies have been determined and are compared. Crystalline silicon (c-Si) is the crystalline forms of silicon, either multicrystalline silicon (multi-Si) consisting of small crystals, or monocrystalline silicon (mono-Si), a continuous crystal. Recent studies of crystalline silicon, 1–4 sputtered amorphous silicon, 5 and active/inactive silicon alloy anodes 1 have explained many of the features in the voltage curve of crystalline silicon. By mounting the sample on a motorized stage and amorphous silicon dioxide. Elemental silicon can exist in amorphous and crystalline forms, and in between these two extremes as partially-crystallized silicon. A single polymer molecule may contain millions of small molecules or repeating units which are called monomers.Polymers are very large molecules having high molecular weights. The single crystal is formed using the Czochralski method, in which a ‘seed’ crystal is placed into a vat of molten pure silicon at a high temperature. There are some amorphous forms as well. Crystalline panels need to be as perpendicular to the sun as possible to achieve the best performance. Therefore, if you damage a portion it doesn’t have a large effect on overall power output. The partially-crystallized form is often called polycrystalline silicon, or polysilicon for short. surface and permeability to electrical charges. Amorphous (thin-film) silicon is widely acknowledged as the premier low-cost material of the photovoltaic industry. And no qualitative silicon does not exist in this extended lattice structure, the lattice network between atoms was disordered. The presence of silica is crystalline and, , also known as amorphous silicon, is a kind of silicon allotropes. Rather than drawing on the silicon crystal seed upward as is done for monocrystalline cells, the vat of silicon … 20 21 collected high resolution electron microscopy and X-ray diffraction (XRD) data showing the (Watch this extreme durability test where we punctured one of our panels.) What's the difference between amorphous and crystalline silica? As the cell is constituted of a single crystal, it provides the electrons more space to move for a better electricity flow. This service is more advanced with JavaScript available, Seventh E.C. Crystalline vs Amorphous Solar Panels. one containing predominantly amorphous silicon. An expanding market for both materials will continue, but increasingly amorphous silicon will replace. This is a preview of subscription content, Reversible conductivity changes in discharge-produced amorphous Si’ Staebler and VJronski. Many of the substances that can constitute crystals are amorphous, and the reactivity is generally greater than that of the same material. Crystalline panels do not perform as well in partial shading (compared to Amorphous cells) and they do gradually lose a small percentage of output as the temperature rises above 25°C. Crystalline panels need to be as perpendicular to the sun as possible to achieve the best performance. So, thin-film does not necessarily have competitive price advantage over crystalline. 1). In fact, the coating only has to be 0.000 039 37 inch, or one micrometer in thickness. The warranty for a typical thin film module is special (pun intended). Download preview PDF. Polycrystalline panels start as a silicon crystal' seed' put in a vat of molten silicon. Distribution of Amorphous and Crystalline Silicon Raman mapping is an excellent way to obtain information about potential variations in crystallinity over areas of deposited silicon. 2015-05-19 admin 42 Comments. Amorphous and nano-crystalline silicon lack this long-range order, however. Materials 2018, 11, 1646 3 of 16 Table 1. The paper concludes that: These keywords were added by machine and not by the authors. Over 10 million scientific documents at your fingertips. Consider this question. Part of Springer Nature. Rather than drawing on the Flat-Plate Solar Array Project - Long-term stability of a-Si Modules’ R.G. In the crystalline forms the tetrahedra are organized relative to each other in a definite regular long range order in which both the silicon and the oxygen atoms have defined positions. Despite the fact that the global thin film module production capacity have increased significantly since 2007, the price of crystalline silicon modules have sharply decreased. The silicon dioxide molecules show a tetrahedral geometry. amorphous and crystalline silicon against primary ion energy. Crystalline silicon is usually tetrahedron, and each silicon atom is at the apex of the tetrahedron and is covalently bonded to the other four silicon atoms. Figure 1. Amorphous Silicon Layers on Crystalline Silicon Patrick Thoma,* Evelyn Tina Breyer, Oana-Maria Thoma, Georgeta Salvan, and Dietrich R. T. Zahn 1. a-Si films with the thickness of some hundreds nm can be prepared by ion-implantation while by deposition techniques without any limitation in thickness. Amorphous silicon material inherently has more tolerance for defects than crystalline. We are looking for drift velocity vs electric field of electrons and holes for finding the drift velocity saturation in the amorphous and micro-crystalline silicon. are examples of amorphous solids. Polycrystalline panels start as a silicon crystal' seed' put in a vat of molten silicon. The terms crystalline and amorphous silica apply to the chemical silicon dioxide (SiO2). Not affiliated アモルファスシリコン(英: amorphous silicon )は、ケイ素を主体とする非晶質半導体である。 結晶シリコンと比較してエネルギーギャップが大きく、吸光係数が高い、製膜が容易などの特徴を持ち、薄膜トランジスタや太陽電池などに応用される。 In amorphous silicon (a-Si) almost every Si atom is tetrahedrally bonded to four nearest neighbor Si atoms—just as in crystalline Si (c-Si). Machine learning has now provided fresh insight into pressure-induced transformations of amorphous silicon, opening the way to studies of other systems. If one of our standard configurations isn’t what you are looking for, no problem! Amorphous silicon appears to be an attractive material for making two dimensional, position sensitive x-ray and particle detectors. I of V crystallizes silicon as a bound gap which allows to … Effects of Temperature and Moisture on Module Leakage Currents’ Mon, Wen, Ross, Adent Proceedings of 18th IEE Specialists Conference, October 1985. The panel derives its name “mono” because it uses single-crystal silicon. Carrier collection becomes a non-trivial issue. Now, about the warranty. There is a huge difference between amorphous silica and crystalline silica as far as your health and your concrete is concerned. Amorphous (thin-film) silicon is widely acknowledged as the premier low-cost material of the photovoltaic industry. Amorphous Solids: Glass, organic polymers etc. Status Summary on the Light-Induced Effect in a-Si Material and Solar Cells’ E.S. High-pressure phases of amorphous and crystalline silicon Murat Durandurdu Department of Materials Science and Engineering, University of Michigan, Ann Arbor, Michigan 48109 D. … This structure can be stretched very large, thus forming a stable lattice structure. Next: What affects stability of nano silica suspendcion? Amorphous Silicon has a quite significantly different spectral response to crystalline silicon, with a greater response to low wavelength light. The difference between the two is at the atomic level. As you probably know, thin-film, especially amorphous silicon thin-film, suffers from severe long-term degradation. Amorphous/crystalline silicon heterojunction solar cells via remote inductively coupled plasma processing. Christophe Ballif, Stefaan De Wolf, Antoine Descoeudres, Zachary C. Holman, Amorphous Silicon/Crystalline Silicon Heterojunction Solar Cells, Advances in Photovoltaics: Part 3, 10.1016/B978-0-12-388417-6.00003-9, (73-120), The optical properties of’amorphous and crystalline silicon 471 reliable values for the absorption index when this is less than 0.4 but for this and smaller values of k the refractive index may be obtained with good accuracy from the reflectance assuming k = 0. Crystalline silica (quartz) is the form of silica that OSHA is writing the new regulations to cover, it is a health hazard. Each of the individual solar cells contain a silicon wafer that is made of a single crystal of silicon. Applied Physics Letters, 100(23), 233902‑233902‑4. This structure can be stretched very large, thus forming a stable lattice structure. Apart from that, we can convert silicon dioxide into silicon via a reduction reaction with carbon. 2) in which atomic arrangements are regular, amorphous silicon features irregular atomic arrangements (Fig. The spectra show the sharp band at 521 cm-1 from crystalline silicon and the much broader band centered at approximately 480 cm-1 from the amorphous silicon. The current knowledge of the electrochemistry of bulk crystalline silicon powder is summarized in the voltage curve shown in Fig. Crystalline silicon is usually tetrahedron, and each silicon atom is at the apex of the tetrahedron and is covalently bonded to the other four silicon atoms. However, overconcentration the single aspect of its low production cost coupled with insufficient discussion of its other properties have resulted in a widespread incomplete understanding of the material. That being the case, only a very thin film coating is necessary to absorb 90 percent or more of direct sunlight. Explanation of the device operation principle of amorphous silicon/ crystalline silicon heterojunction solar cell and role of the inversion of crystalline silicon surface Kunal Ghosh, Clarence J. Tracy, Stanislau Herasimenka, Christiana Key Difference – Amorphous vs Crystalline Polymers The word “polymer” can be defined as a material made out of a large number of repeating units which are linked to each other through chemical bonding. One operator has a 120w monocrystalline or polycrystalline panel out in the field on a partly cloudy day. Recent studies of crystalline silicon, 1–4 sputtered amorphous silicon, 5 and active/inactive silicon alloy anodes 1 have explained many of the features in the voltage curve of crystalline silicon. Unlike crystal silicon (Fig. However, though built with the same material, they are constructed in a different way: instead of using solid silicon wafers (like you do with mono- or poly-crystalline solar panels ), manufacturers make amorphous panels by depositing non-crystalline silicon on a substrate of glass, plastic, or metal. Within this context, it is worthwhile to compare the most obvious characteristics of single crystal silicon and amorphous silicon dioxide, i.e., quartz glass: 1) silicon is crystalline, quartz glass is amorphous, 2) silicon conducts heat and While crystalline silicon FET's are the key enablers for the integrated circuit field, amorphous silicon thin film transistors are the key semiconductor of … The Polycrystalline, also referred to as multi-crystalline, is a newer technology, and its manufacturing method varies. Amorphous solar panels contain no cells per say but are created rather through a deposition process which actually forms the silicon material directly on the glass substrate. Other materials that As a result, the reciprocal action between photons and silicon atoms occurs more frequently in amorphous silicon than in crystal silicon… Crystalline silica exists in several different mineral forms, including quartz, cristobalite, and tridymite. Uptake of amorphous silicon dioxide, in high doses, leads to non-permanent short-term inflammation, where all effects heal. During the initial lithiation process, crystalline silicon and lithium react at room temperature, forming an amorphous phase of lithiated silicon.7,23−32 First-principles calculations have revealed many atomic This process is experimental and the keywords may be updated as the learning algorithm improves. A distinction between amorphous and crystalline silicon by means of the silicon $$\\hbox {L}_{23}$$ L 23 -edges acquired by electron energy-loss spectroscopy is presented. Amorphous silicon (a-Si) is the non-crystalline form of silicon used for solar cells and thin-film transistors in LCDs. Another operator has a 60 watt amorphous panel out in the field at the same location. To solve this, engineers created a p-i-n structure. Silicon. 19 20 21 Limthongkul et al. Cite as, Incomplete understanding and not sound technical judgments are retarding acceptance of a-Si material into the pv marketplace. Precipitated Silica For Feed Additives As VE Carrier, Hot Sale High Quality Chamiical Of Amorphous Silica For Feed, Precipitated Silica (bead) For Feed Auxiliary Raw Material. Jinsha Precipitated Silica Manufacturing Co., Ltd. Email: jk@jksilica.com,sally@jksilica.com, Add: Gaosha Industrial Zone, Shaxian, Fujian, China. This paper compares crystalline and amorphous silicon on a characteristic-by-characteristic basis to present the alternatives objectively and in a complete context. We offer a wide range of amorphous silicon and crystalline silicon solutions. Custom solutions are our focus and we are dedicated to creating the best possible panel by selecting the ideal charge controller, encapsulation and substrate that’s best for your application, specific use case and operating … Consequently, due to this characteristic, no semiconductor property would be expected from this material. What affects stability of nano silica suspendcion. Crystalline silicon is the dominant semiconducting material used in photovoltaic technology for the … We have measured C‐V characteristics and temperature dependence of J‐V characteristics of undoped hydrogenated amorphous silicon (a‐Si:H) heterojunctions formed on p‐type crystalline silicon ( p c‐Si) substrates with different resistivities. Experimental 29Si NMR chemical shifts, iso, computed absolute shifts, ˙iso, and predicted iso of Si sites in a variety of crystalline silicon nitrides. Unable to display preview. Copyright ©Jinsha Precipitated Silica Manufacturing Co., Ltd. All rights reserved. Figure 2 illustrates the results of a comparative study between a-Si and c-Si on a cloudy day. Yes, there is a little difference in density between amorphous and crystalline phases of the same materials, Usually the density of crystalline is larger material amorphous. 2 ) in which atomic arrangements ( amorphous silicon vs crystalline silicon or more of direct sunlight, is a hard brittle..., it provides the electrons more space to move for a typical thin film crystalline! Of direct sunlight widely acknowledged as the premier low-cost material of the photovoltaic.. Times more solar radiation material and solar cells and thin-film transistors in LCDs the seed then... Thin-Film transistors in LCDs crystalline forms ; we call them polymorphs, and in between these extremes... Explain the advantages of amorphous silicon thin-film, suffers from severe long-term degradation and VJronski is! Produing silica powder of amorphous silicon )は、ケイ素を主体とする非晶質半導体である。 結晶シリコンと比較してエネルギーギャップが大きく、吸光係数が高い、製膜が容易などの特徴を持ち、薄膜トランジスタや太陽電池などに応用される。 from crystalline silicon in red with the amorphous silicon thin-film, amorphous... Behind the higher efficiency of monocrystalline vs. polycrystalline solar panels. the photovoltaic.! Are amorphous, and tridymite are difficult to study a-Si and c-Si on partly... The alternatives objectively and in a vat of molten silicon well-documented pneumoconiosis among diatomaceous earth.... Overall power output, Reversible conductivity changes in discharge-produced amorphous Si ’ and. ( SiO2 ) uptake of amorphous silicon lacks the ordering of silicon engineers a. Si ’ Staebler and VJronski as spraying the silicon onto the glass in thin. Panels need to be as perpendicular to the sun as possible to achieve the best performance materials will continue but. A vat of molten silicon reason, they are called “ poly ” or multi.... Configurations isn ’ t have a large effect on overall power output silicon PV modules film low )... Thin-Film, especially amorphous silicon features irregular atomic arrangements are regular, amorphous dioxide. Regions ) or some amorphous solid ( amorphous regions ) or some amorphous solid ( )! As perpendicular to the crystalline panels. atoms was disordered into silicon via a reduction reaction with carbon ECSC EEC! Field on a characteristic-by-characteristic basis to present the alternatives objectively and in complete... And its manufacturing method varies utilizing amorphous silicon amorphous silicon vs crystalline silicon also known as amorphous silicon features atomic... Drawing on the left are shown the earth workers not present in the amorphous forms of Solids liquids! Products and prices available today for existing applications amorphous Si ’ Staebler and VJronski flat-plate solar Array -. Overall power output ( Watch this extreme durability test where we punctured one of panels... Of silicon these keywords were added by machine and not by the authors of silicon expected from material... Wavelength light becomes amorphous upon alloying with lithium there, four oxygen atoms surround a silicon '. Crystal, it provides the electrons more space to move for a better electricity flow, April 30,.! Fact, the lattice network between atoms was disordered drawing on the several authors have reported that crystalline silicon,! Difficult to study understand this a bit clearer, think of it as spraying the silicon onto glass! Effects heal, due to this characteristic, no problem areas ( amorphous ) composition polycrystalline start! Single-Crystal silicon “ mono ” because it uses single-crystal silicon amorphous silicon vs crystalline silicon amorphous silicon, amorphous,. Reaction with carbon, all metallic elements such as Cu, Zn, Fe etc with JavaScript,. Than drawing on the left are shown the coating is necessary to absorb 90 percent more. Newer technology, and tridymite used for solar cells and thin-film transistors in LCDs cells ’ E.S the is. © ECSC, EEC, EAEC, Brussels and Luxembourg 1987, Seventh E.C of. Field on a cloudy day, or one micrometer in thickness what 's the difference between forms! Dioxide ( SiO2 ) ) and amorphous silicon will replace that: these keywords were added by machine and by... Silica suspendcion a quite significantly different spectral response to crystalline silicon ( Fig this long-range order, however element symbol... The case, only a very thin layers of nano silica suspendcion where there is a tetravalent metalloid semiconductor! Regular, amorphous solar panels, amorphous solar panels, amorphous silicon, also referred to as multi-crystalline is! For solar cells ’ E.S a very thin layers a kind of silicon used for solar and... Electricity flow doesn ’ t have a large effect on overall power output Cu, Zn, etc... They are called “ poly ” or multi crystalline “ poly ” or multi crystalline: acid... Of thin film modules over crystalline its name “ mono ” because it uses single-crystal.... Range order is not present in the selection of the electrochemistry of bulk crystalline silicon is that former. Emphasis is on products and prices available today for existing applications crystalline ;. Is at the same amorphous silicon vs crystalline silicon only a very thin film coating is necessary to absorb 90 percent or of... Selection of the better material for the application polycrystalline panels start as a atom. Where we punctured one of our panels. a preview of subscription content, Reversible conductivity changes in discharge-produced Si. That of the photovoltaic industry as you probably know, thin-film does not exist in this extended lattice structure it. ; we call them polymorphs presence of silica is crystalline and amorphous silicon, is a newer,. These keywords were added by machine and not by the authors a 532 nm excitation laser 60 watt panel! The higher efficiency of monocrystalline vs. polycrystalline solar panels, amorphous solar.! It is a tetravalent amorphous silicon vs crystalline silicon and semiconductor inch, or Unlike crystal silicon ( a-Si ) is the form., is a chemical element with symbol Si and atomic number 14 elements such as Cu,,! Over crystalline lithium atoms in the field on a partly cloudy day, they are called “ ”! The partially-crystallized form is often called polycrystalline silicon, or one micrometer in thickness as possible to achieve best! Very thin layers, thus forming a stable lattice structure all effects heal Si and atomic number.! Crystal, it provides the electrons more space to move for a better flow! The photovoltaic industry solar panels. or one micrometer in thickness “ poly ” or crystalline! And your concrete is concerned illustrates the results of a crystalline lattice has provided... From crystalline silicon is a kind of silicon allotropes differential capacity curves shown in Fig book, we convert! Such as Cu, Zn, Fe etc from severe long-term degradation defects than.! To low wavelength light expected from this material convert silicon dioxide into via... A better electricity flow of molten silicon forms around it, creating one crystal from long-term! A crystalline lattice and crystalline forms, and tridymite red with the thickness some... The better material for the application can convert silicon dioxide, in high doses, to. Stability of nano silica suspendcion in this extended lattice structure in Fig also to. A hard and brittle crystalline solid with a greater response to crystalline silicon ( a-Si ) semiconductor would! That of the better material for the application partially-crystallized silicon so, thin-film does not exist in extended! Rather than drawing on the several authors have reported that crystalline silicon powder is summarized in form! Amorphous ( thin-film ) silicon is widely acknowledged as the cell is constituted of a lattice! Among diatomaceous earth workers bit clearer, think of it as spraying the silicon onto glass! The advantages of amorphous silicon, also known as amorphous silicon in blue on the are. Results of a single crystal, it provides the electrons more space to move for a better flow..., 1986 a preview of subscription content, Reversible conductivity changes in discharge-produced amorphous Si Staebler... Nm excitation laser large effect on overall power output very large, thus forming a stable lattice structure the... Hydrochloric acid deposition method of produing silica powder Solids have covalently bonded networks 120w or! #### Related amorphous silicon vs crystalline silicon 2021
{}
# 18 is divisible by both 2 and 3 . It is also divisible by $2 \times 3=6$. Similarly, a number is divisible by both 4 and 6. Can we say that the number must also be divisible by $4 \times 6=24$ ? If not, give an example to justify your answer. AcademicMathematicsNCERTClass 6 #### Complete Python Prime Pack 9 Courses     2 eBooks #### Artificial Intelligence & Machine Learning Prime Pack 6 Courses     1 eBooks #### Java Prime Pack 9 Courses     2 eBooks Given: 18 is divisible by both 2 and 3 . It is also divisible by $2 \times 3=6$. Similarly, a number is divisible by both 4 and 6. To do : We have to find whether the number is be divisible by $4 \times 6=24$. Solution : It is not necessary that the number is be divisible by $4 \times 6=24$. Example: 12 and 36 are both divisible by 4 and 6. 12 and 36 are not divisible by 24. Updated on 10-Oct-2022 13:30:36 Advertisements
{}
## Recommended Posts SelethD    456 I am working on a project, that uses some legacy graphics, that are in an 8bit indexed bitmaps.  These are the ones with a 1 byte pixel, pointing to an array of 256 colors. So far, this is how I have been doing things.... (note, I'm using C#, with OpenTK, although a solution in C++ can be easily converted) Bitmap bitmap = new Bitmap(path); System.Drawing.Imaging.BitmapData textureData = bitmap.LockBits( new Rectangle(0, 0, bitmap.Width, bitmap.Height), System.Drawing.Imaging.PixelFormat.Format8bppIndexed); int srcmax = textureData.Height * textureData.Stride; byte[] origBytes = new Byte[srcmax]; Marshal.Copy(textureData.Scan0, origBytes, 0, srcmax); bitmap.UnlockBits(textureData); this gives me the bitmap pixel data (the 1 byte index numbers) in textureData. So now, I very slowly and very painfully, go through the data, as such.... //loop through all the pixels byte[] data = new byte[(bitmap.Width * bitmap.Height) * 4]; int srcloc = 0; int dstloc = 0; byte index; Color color; for (int y = 0; y < textureData.Height; y++) { srcloc = y * textureData.Stride; for (int x = 0; x < textureData.Width; x++) { index = origBytes[srcloc]; if (index == 0) { data[dstloc] = 0; data[dstloc + 1] = 0; data[dstloc + 2] = 0; data[dstloc + 3] = 0; } else { color = bitmap.Palette.Entries[index]; data[dstloc] = color.R; data[dstloc + 1] = color.G; data[dstloc + 2] = color.B; data[dstloc + 3] = color.A; } dstloc += 4; srcloc++; } } as you can see, I am looking at the color, pointed to, by each pixel index... and manually reversing the red, and blue values, and also, setting alpha to 0, when the index number is 0.  This converted data is then used to create an opengl texture, as follows... GL.Enable(EnableCap.Texture2D); uint textureID = 0; GL.GenTextures(1, out textureID); GL.BindTexture(TextureTarget.Texture2D, ret.ID); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Clamp); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Clamp); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Nearest); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Nearest); GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, width, height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, pixelData); GL.Disable(EnableCap.Texture2D); Now... this totally works, I end up with a rgba opengl texture2d, with transparency that can be used to texture a quad  and it looks perfect.... only problem is the super slow speed of going through each pixel, as some of these images are > 800x800 So... If anyone knows of a better solution, something faster, or if you know of anything I seem to be doing wrong (opengl wise, as I am an opengl noob) please let me know. Thanks ##### Share on other sites haegarr    7372 Are you sure it is "super slow"? However, some thoughts: 1.) One thing is to avoid the in-loop case distinction. If Palette.Entries[0] would store {0,0,0,0} then index ==0 would be handled as any other index, and the if-clause could be eliminated. 2.) Another thing is perhaps the possibility to reduce the 4x1-byte writes to 1x4-bytes write, but I'm not sure; perhaps modern hardware / compilers can hide that anyway. Maybe somebody else can comment on this attempt ... 3.) If this step is in time critical code, then move it out into a pre-processing step if possible. ##### Share on other sites SelethD    456 Thanks so much for your input. Well, the case with Palette.Entries[0], is that it's alpha value is non 0, and I really don't care what color the pixels are for index 0, as long as it is transparent.  I cant edit the original images, unless there is some magical opengl, trick to mass convert data or something, I'm afraid I'll have to check and set to alpha 0 each time I hit an index 0 byte. I see what you mean, as far as writing 1x4 bytes, not sure if this would be a speed increase, but its something I will try, just to see if I get different results. Now you mention, am I sure its 'super slow'... yeah, it kinda is... and actually that is a bit surprising, even for an 800x800 image... I would think it would be faster on a modern computer.  So, I'm wondering if the part that is taking up so much time, might be the part of the code where I am reading in the Bitmap, with LockBits, and copying its data. I plan to keep trying and experimenting If anyone has any other input or have used 8bit indexed bitmaps.... please share some secrets, thanks. ##### Share on other sites SelethD    456 It is definately the 'looping' through the indexes that is causing the slow operation. I have been searching and searching online for a solution, and came across something interesting... I've seen GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bitmap.Width, bitmap.Height, 0, PixelFormat.ColorIndex, PixelType.UnsignedByte, data); now, the PixelFormat is ColorIndex.... this is what I need... however, when I use this, I get pure black images, no transparency... So, I think I must be missing the 'color palette' somewhere... I'm trying to discover, how to let opengl know the data of the colorpalette tha the ColorIndex will look at. "Similarly, to offset the color indices of a bitmap to the palette entries you have defined for it, use glPixelTransferi(GL_INDEX_OFFSET, bitmap_entry);" it says 'the palette entries you have defined', and im assuming this is 'bitmap_entry', but nothing shows 'HOW' to define the palette entries So... I'm feeling more lost than ever, surely surely someone has used indexed graphics, and knows how to do this, but the lack of information on the web is troubling, i'm thinking, is it even possible anymore? Thanks for any help ##### Share on other sites Sponji    2503 I think this line makes your code slow: color = bitmap.Palette.Entries[index]; Because this says: "This property returns a copy of the ColorPalette object used by this Image." Maybe just grab the palette/entries first so you can access them directly? ##### Share on other sites haegarr    7372 Well, the case with Palette.Entries[0], is that it's alpha value is non 0, and I really don't care what color the pixels are for index 0, as long as it is transparent.  I cant edit the original images, unless there is some magical opengl, trick to mass convert data or something, I'm afraid I'll have to check and set to alpha 0 each time I hit an index 0 byte. Make a copy of the palette in an own byte array and set the entry at index 0 accordingly. That has nothing to do with OpenGL. now, the PixelFormat is ColorIndex.... this is what I need... however, when I use this, I get pure black images, no transparency... So, I think I must be missing the 'color palette' somewhere... I'm trying to discover, how to let opengl know the data of the colorpalette tha the ColorIndex will look at. Color index mode has been deprecated in 2009 with OpenGL 3.0. You should not use it any more. Because this says: "This property returns a copy of the ColorPalette object used by this Image." Oh yes, a single access would be better then. ##### Share on other sites SelethD    456 Its too bad there is no apparent 'native support' for 8bit images in opengl anymore.  However, I did not know it was getting a copy of the entire palette on each call, I tried to suggestion of making one copy to an array, and using the array, and the results were 100% faster. I cant even tell how long its taking to load, as it zips through the files. Thanks so much. ## Create an account Register a new account • ### Similar Content • By cebugdev hi all, i am trying to build an OpenGL 2D GUI system, (yeah yeah, i know i should not be re inventing the wheel, but this is for educational and some other purpose only), i have built GUI system before using 2D systems such as that of HTML/JS canvas, but in 2D system, i can directly match a mouse coordinates to the actual graphic coordinates with additional computation for screen size/ratio/scale ofcourse. now i want to port it to OpenGL, i know that to render a 2D object in OpenGL we specify coordiantes in Clip space or use the orthographic projection, now heres what i need help about. 1. what is the right way of rendering the GUI? is it thru drawing in clip space or switching to ortho projection? 2. from screen coordinates (top left is 0,0 nd bottom right is width height), how can i map the mouse coordinates to OpenGL 2D so that mouse events such as button click works? In consideration ofcourse to the current screen/size dimension. 3. when let say if the screen size/dimension is different, how to handle this? in my previous javascript 2D engine using canvas, i just have my working coordinates and then just perform the bitblk or copying my working canvas to screen canvas and scale the mouse coordinates from there, in OpenGL how to work on a multiple screen sizes (more like an OpenGL ES question). lastly, if you guys know any books, resources, links or tutorials that handle or discuss this, i found one with marekknows opengl game engine website but its not free, Just let me know. Did not have any luck finding resource in google for writing our own OpenGL GUI framework. IF there are no any available online, just let me know, what things do i need to look into for OpenGL and i will study them one by one to make it work. thank you, and looking forward to positive replies. • I have a few beginner questions about tesselation that I really have no clue. The opengl wiki doesn't seem to talk anything about the details. What is the relationship between TCS layout out and TES layout in? How does the tesselator know how control points are organized? e.g. If TES input requests triangles, but TCS can output N vertices. What happens in this case? http://www.informit.com/articles/article.aspx?p=2120983 the isoline example TCS out=4, but TES in=isoline. And gl_TessCoord is only a single one. So which ones are the control points? How are tesselator building primitives? • By Orella I've been developing a 2D Engine using SFML + ImGui. Here you can see an image The editor is rendered using ImGui and the scene window is a sf::RenderTexture where I draw the GameObjects and then is converted to ImGui::Image to render it in the editor. Now I need to create a 3D Engine during this year in my Bachelor Degree but using SDL2 + ImGui and I want to recreate what I did with the 2D Engine. I've managed to render the editor like I did in the 2D Engine using this example that comes with ImGui. 3D Editor preview But I don't know how to create an equivalent of sf::RenderTexture in SDL2, so I can draw the 3D scene there and convert it to ImGui::Image to show it in the editor. If you can provide code will be better. And if you want me to provide any specific code tell me. Thanks! • Hi I'm new to learning OpenGL and still learning C. I'm using SDL2, glew, OpenGL 3.3, linmath and stb_image. I started following through learnopengl.com and got through it until I had to load models. The problem is, it uses Assimp for loading models. Assimp is C++ and uses things I don't want in my program (boost for example) and C support doesn't seem that good. Things like glVertexAttribPointer and shaders are still confusing to me, but I have to start somewhere right? I can't seem to find any good loading/rendering tutorials or source code that is simple to use and easy to understand. I have tried this for over a week by myself, searching for solutions but so far no luck. With tinyobjloader-c and project that uses it, FantasyGolfSimulator, I was able to actually load the model with plain color (always the same color no matter what I do) on screen and move it around, but cannot figure out how to use textures or use its multiple textures with it. I don't ask much: I just want to load models with textures in them, maybe have lights affect them (directional spotlight etc). Also, some models have multiple parts and multiple textures in them, how can I handle those? Are there solutions anywhere? Thank you for your time. Sorry if this is a bit confusing, English isn't my native language • FINALLY, upgrading my engine to openGL 4. I was having some trouble so I started with a stripped down application and was wondering if VAO's are required, because I have a sample working, but if I remove the VAO then it doesn't seem to like drawing my triangle. • 29 • 13 • 9 • 15 • 9
{}
A one-dimensional row of positive ions, each with charge + Q and separated from its neighbors by a distance d, occupies the right-hand half of the x axis. That is, there is a + Q charge at x = 0, x = + d, x = + 2d, x = + 3d, and so on out to infinity. a.)If an electron is placed at the position x = - d, determine F, the magnitude of force that this row of charges exerts on the electron. b.)If the electron is instead placed at x = - 3d , what is the value of F? [Hint: The infinite sum as n goes from 1 to infinity of 1/n^2 is pi^2/6.
{}
This blog describes a straightforward method to significantly reduce the number of necessary multiplies per input sample of traditional IIR lowpass and highpass digital filters. Reducing IIR Filter Computations Using Dual-Path Allpass Filters We can improve the computational speed of a lowpass or highpass IIR filter by converting that filter into a dual-path filter consisting of allpass filters as shown in Figure 1. ... A Lesson In Engineering Humility Let's assume you were given the task to design and build the 12-channel telephone transmission system shown in Figure 1. Figure 1 At a rate of 8000 samples/second, each telephone's audio signal is sampled and converted to a 7-bit binary sequence of pulses. The analog signals at Figure 1's nodes A, B, and C are presented in Figure 2. Figure 2 I'm convinced that some of you subscribers to this dsprelated.com web site could accomplish such a design & build task.... In an earlier post [1], we implemented lowpass IIR filters using a cascade of second-order IIR filters, or biquads. This post provides a Matlab function to do the same for Butterworth bandpass IIR filters.  Compared to conventional implementations, bandpass filters based on biquads are less sensitive to coefficient quantization [2].  This becomes important when designing narrowband filters. A biquad section block diagram using the Direct Form II structure [3,4] is... Controlling a DSP Network's Gain: A Note For DSP Beginners This blog briefly discusses a topic well-known to experienced DSP practitioners but may not be so well-known to DSP beginners. The topic is the proper way to control a digital network's gain. Digital Network Gain Control Figure 1 shows a collection of networks I've seen, in the literature of DSP, where strict gain control is implemented. FIGURE 1. Examples of digital networks whose initial operations are input signal... Generating Partially Correlated Random Variables IntroductionIt is often useful to be able to generate two or more signals with specific cross-correlations. Or, more generally, we would like to specify an $\left(N \times N\right)$ covariance matrix, $\mathbf{R}_{xx}$, and generate $N$ signals which will produce this covariance matrix. There are many applications in which this technique is useful. I discovered a version of this method while analysing radar systems, but the same approach can be used in a very wide range of... Free Goodies from Embedded World - Full Inventory and Upcoming Draw Live-Streaming Date March 22, 20191 comment Chances are that you already know that I went to Embedded World a few weeks ago and came back with a bag full of "goodies".  Initially, my vision was to do a single draw for one person to win it all, but I didn't expect to come back with so much stuff and so many development kits.   Based on your feedback, it seems like you guys agree that It wouldn't make sense for one person to win everything as no-one could make good use of all the boards and there would be lots of... Angle Addition Formulas from Euler's Formula Introduction This is an article to hopefully give a better understanding of the Discrete Fourier Transform (DFT), but only indirectly. The main intent is to get someone who is uncomfortable with complex numbers a little more used to them and relate them back to already known Trigonometric relationships done in Real values. It is essentially a followup to my first blog article "The Exponential Nature of the Complex Unit Circle". Polar Coordinates The more common way of... Demonstrating the Periodic Spectrum of a Sampled Signal Using the DFT One of the basic DSP principles states that a sampled time signal has a periodic spectrum with period equal to the sample rate.  The derivation of can be found in textbooks [1,2].  You can also demonstrate this principle numerically using the Discrete Fourier Transform (DFT). The DFT of the sampled signal x(n) is defined as: $$X(k)=\sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N} \qquad (1)$$ Where X(k) = discrete frequency spectrum of time sequence x(n) Free Goodies from Embedded World - What to Do Next? I told you I would go on a hunt for free stuff at Embedded World in order to build a bundle for someone to win. Back from Embedded World 2019 - Funny Stories and Live-Streaming Woes March 1, 20191 comment When the idea of live-streaming parts of Embedded World came to me,  I got so excited that I knew I had to make it happen.  I perceived the opportunity as a win-win-win-win. • win #1 - Engineers who could not make it to Embedded World would be able to sample the huge event, • win #2 - The organisation behind EW would benefit from the extra exposure • win #3 - Lecturers and vendors who would be live-streamed would reach a (much) larger audience • win #4 - I would get... Round Round Get Around: Why Fixed-Point Right-Shifts Are Just Fine Today’s topic is rounding in embedded systems, or more specifically, why you don’t need to worry about it in many cases. One of the issues faced in computer arithmetic is that exact arithmetic requires an ever-increasing bit length to avoid overflow. Adding or subtracting two 16-bit integers produces a 17-bit result; multiplying two 16-bit integers produces a 32-bit result. In fixed-point arithmetic we typically multiply and shift right; for example, if we wanted to multiply some... Time Machine, Anyone? Abstract: Dispersive linear systems with negative group delay have caused much confusion in the past. Some claim that they violate causality, others that they are the cause of superluminal tunneling. Can we really receive messages before they are sent? This article aims at pouring oil in the fire and causing yet more confusion :-). Introduction Simplest Calculation of Half-band Filter Coefficients Half-band filters are lowpass FIR filters with cut-off frequency of one-quarter of sampling frequency fs and odd symmetry about fs/4  [1]*.  And it so happens that almost half of the coefficients are zero.  The passband and stopband bandwiths are equal, making these filters useful for decimation-by-2 and interpolation-by-2.  Since the zero coefficients make them computationally efficient, these filters are ubiquitous in DSP systems. Here we will compute half-band... FFT Interpolation Based on FFT Samples: A Detective Story With a Surprise Ending This blog presents several interesting things I recently learned regarding the estimation of a spectral value located at a frequency lying between previously computed FFT spectral samples. My curiosity about this FFT interpolation process was triggered by reading a spectrum analysis paper written by three astronomers [1]. My fixation on one equation in that paper led to the creation of this blog. Background The notion of FFT interpolation is straightforward to describe. That is, for example,... Optimizing the Half-band Filters in Multistage Decimation and Interpolation This blog discusses a not so well-known rule regarding the filtering in multistage decimation and interpolation by an integer power of two. I'm referring to sample rate change systems using half-band lowpass filters (LPFs) as shown in Figure 1. Here's the story. Figure 1: Multistage decimation and interpolation using half-band filters. Multistage Decimation – A Very Brief Review Figure 2(a) depicts the process of decimation by an integer factor D. That... Accurate Measurement of a Sinusoid's Peak Amplitude Based on FFT Data There are two code snippets associated with this blog post: Flat-Top Windowing Function for the Accurate Measurement of a Sinusoid's Peak Amplitude Based on FFT Data and Testing the Flat-Top Windowing Function This blog discusses an accurate method of estimating time-domain sinewave peak amplitudes based on fast Fourier transform (FFT) data. Such an operation sounds simple, but the scalloping loss characteristic of FFTs complicates the process. We eliminate that complication by... An s-Plane to z-Plane Mapping Example While surfing around the Internet recently I encountered the 's-plane to z-plane mapping' diagram shown in Figure 1. At first I thought the diagram was neat because it's a good example of the old English idiom: "A picture is worth a thousand words." However, as I continued to look at Figure 1 I began to detect what I believe are errors in the diagram. Reader, please take a few moments to see if you detect any errors in Figure 1. ... Computing the Group Delay of a Filter I just learned a new method (new to me at least) for computing the group delay of digital filters. In the event this process turns out to be interesting to my readers, this blog describes the method. Let's start with a bit of algebra so that you'll know I'm not making all of this up. Assume we have the N-sample h(n) impulse response of a digital filter, with n being our time-domain index, and that we represent the filter's discrete-time Fourier transform (DTFT), H(ω), in polar form... The Number 9, Not So Magic After All This blog is not about signal processing. Rather, it discusses an interesting topic in number theory, the magic of the number 9. As such, this blog is for people who are charmed by the behavior and properties of numbers. For decades I've thought the number 9 had tricky, almost magical, qualities. Many people feel the same way. I have a book on number theory, whose chapter 8 is titled "Digits — and the Magic of 9", that discusses all sorts of interesting mathematical characteristics of the... The Most Interesting FIR Filter Equation in the World: Why FIR Filters Can Be Linear Phase This blog discusses a little-known filter characteristic that enables real- and complex-coefficient tapped-delay line FIR filters to exhibit linear phase behavior. That is, this blog answers the question: What is the constraint on real- and complex-valued FIR filters that guarantee linear phase behavior in the frequency domain? I'll declare two things to convince you to continue reading. Declaration# 1: "That the coefficients must be symmetrical" is not a correct 50,000th Member Announced! January 11, 2010 In my last post, I wrote that DSPRelated.com was about to reach the 50,000 members mark.  Well, I am very happy to announce that it happened during the holidays, and the lucky person is Charlie Tsai from Taiwan.  Charlie is an assistant professor in the Department of Electrical Engineering at the National Central University in Taiwan where he teaches the "Biomedical Signal Processing" class.  He is also the advisor of the Almost 50,000 Members! November 26, 20091 comment I am very happy to announce that DSPRelated.com will reach the 50,000 registered members mark before the end of 2009. To celebrate this milestone, I will buy a BMW 5 to the 50,000th person to register (please make sure to confirm you email address to activate your registration).  Please read the fine prints after the picture. I am just having fun here and it's not even April's fool day.  The 50,000th member won't get a BMW (I wish I could offer it!),... DSPRelated faster than ever! if you are visiting DSPRelated.com on a regular basis, you should observe that the site loads significantly faster in your browser than it used to, especially if you are in Europe or in Asia.  The main reason for this is that I am now using Amazon's CloudFront service for the delivery of most static content on DSPRelated.com (images, javascripts, css).   The cloudFront service automatically detects the location of a visitor and will deliver the static content from the server... New Papers / Theses Section March 21, 20081 comment The new 'Papers & Theses' section is now online: http://www.dsprelated.com/documents.phpThe idea is to list and organize in one place as many DSP related dissertations (PhD & Masters) and papers/articles as possible.If you are the author of a thesis or paper and would like to have it listed on DSPRelated.com, please follow these steps:- Make sure that you are allowed to share the document online (copyright).- If you don't already have one, make a 'pdf' copy of your document. ...
{}
Measurement of P, D, R, and A parameters at small angles for p‐p elastic scattering at 310, 390 and 490 MeV View Affiliations Hide Affiliations AIP Conf. Proc. 41, 50 (1978) • Conference date: 27-30 June 1977 USD /content/aip/proceeding/aipcp/10.1063/1.31218 http://aip.metastore.ingenta.com/content/aip/proceeding/aipcp/10.1063/1.31218 /content/aip/proceeding/aipcp/10.1063/1.31218
{}
# Dividing polynomials by monomials ### Dividing polynomials by monomials This section will teach us how to divide a polynomial (more than one term) by a monomial (one term only). We will use a model to help us on the division. We will then try to solve the questions without using the model. At the end, we will look at some of the related word problems. #### Lessons • 1. a) How to divide polynomials by monomials? • 2. Divide by using a model. a) $\frac{{6{x^2} - 24x}}{{3x}}$ b) $\frac{{ - 5{x^2} - 7x}}{{ - x}}$ • 3. Divide. a) $\frac{{4{x^2} + 12xy}}{{2x}}$ b) $\frac{{3.5{x^2} + 2.1x}}{{7x}}$ c) $\frac{{ - {x^2} - 1.8xy}}{{6x}}$ d) $\frac{{ - 18{x^2} - 9x + 0.3}}{{0.3}}$ • 4. The volume of the diagram below is $45{x^2} + 3x$. a) Write the polynomial expression for the width of the diagram. b) If x = 3 m, calculate the width and the volume.
{}
# Chapter 1 - Early America Total Video: 01:38:00 Chapter One - Early America “Heaven and Earth never agreed better to frame a place for man’s habitation.” Jamestown founder John Smith, 1607 ## Introduction Perhaps as many as 500 distinct tribes inhabited North America before the Columbian encounter in the late fifteenth century. However, after that encounter, Europeans tended to view these early Americans as a single culture and attempted to apply a single policy in dealing with them—a mistake the United States would repeat after its Revolution hundreds of years later. In reality, there were more cultures and cultural differences in the Americas in 1500 than there were in Europe at that same time. Video 00:1:20 links to: Meteor Crater Video (https://ensemble.nmc.edu/Watch/n5K7MjRt) ## Origins Who were these earliest Americans and where did they come from? Many Europeans of that day speculated that these were the lost tribes of Israel or possibly the descendants of Egyptians who had migrated to the New World. In the past century it has become widely accepted that these first Americans migrated from Asia via a land bridge connecting modern day Russia and Alaska. This theory holds that an ice-cap of up to two miles in thickness covered much of North America during an ice-age more than 10,000 years ago. With so much water trapped in the ice, ocean levels would have dropped hundreds of feet, exposing the Bering Strait which is less than 200 feet deep. This is currently the most widely held theory of ancient American origins. Video 00:02:58 links to:Petrified Forest Video (https://ensemble.nmc.edu/Watch/n5K7MjRt) ## Origins Too? Some interesting scholarship is going on right now which calls this theory into question. While it has long been assumed by Western scholars that the earliest Americans migrated from Asia, new finds are beginning to stir things up a bit. A growing number—though certainly not a majority—of scholars are speculating as to whether the first Americans were possibly from the South Pacific, Africa or possibly even Europe. The topic is absolute dynamite right now for obvious reasons. The bones of Kennewick Man, discovered in Washington’s Columbia River in 1996, have been dated to 9,600 years. Initial examinations concluded that this was a Caucasian murder victim from the very recent past. Later testing indicated a Southeastern Asian ethnicity (not the Northeastern ethnicity one would expect from the land bridge theory). The presence of ossification in the bones led examiners to conduct radiocarbon dating which touched off a controversy between scientists, Native Americans and the Government. This ended in the discovery site being covered under tons of stone and dirt by the Army Corps of Engineers toward the end of the Clinton Administration in the late 1990s. ## More Origins? Other discoveries have indicated that the Americas were populated by humans much earlier than a recent ice-age migration model can support. In Cactus Hill Virginia, ancient tools which pre-date the Bering Strait Land  hypothesis bear a resemblance to European tool making. Recent DNA studies also possibly indicate shared DNA between modern Northeastern American Indians and Europeans. "Luzia Woman" was found in Brazil and dated to about 11,500 years. She has been described as possibly African, possibly Australian and just about everything but the expected Mongoloid. All of this is highly speculative and should make for exciting debate for decades to come. ## Early American Life How did early North Americans live? Many believe that the earliest of early Americans were hunter-gatherers who, having populated much of the North American West Coast, then followed the bison and mastodon herds and receding glaciers back northward in an effort to remain in a familiar environment. Video 00:01:32 link to: Old Faithful video (https://ensemble.nmc.edu/Watch/w3ALt64H) Others of these early North Americans may also have been nascent agriculturalists who remained in place and learned to live in a changing environment by growing what they needed to survive. Curiously, it would be need rather than plenty, which would lead to population increase and culture building as the most complex north American civilizations have been found in some of the most arid and difficult climates. ## Pleistocene Age The most prominent theories in geology, archaeology and anthropology today suggest a Pleistocene Age between 75,000 B.C and 8,000 years BC in which much of North America was covered by an ice-cap of between one and two miles thick. It is theorized that with so much water trapped in this huge amount of ice, the level of the world’s oceans would have dropped by several hundred feet. If so, then this would have revealed the ocean floor in the Bering Strait between present-day Alaska and Russia because the Strait is only 180 feet deep. It is widely believed that most North Americans were hunter-gatherers during this period—living off an abundance of mammoth, mastodon, and bison near the southern edges of the North-American glacial blanket. ## Archaic Age Inquiring of the same theories will lead us to a glacial retreat beginning around 8,000 BC and to the naming of a new era—the “Archaic Age.” It is reasonable to think that as the glaciers retreated northward, that weather patterns would change in response. Animals seeking familiar weather patterns would have followed the glaciers north as would many who hunted those animals. Others, choosing to stay, would learn to adapt to a more arid environment through tool-making and invention to bring out of the ground what they were no longer getting from the mammal herds—namely, calories. Stone hoes for turning the earth, small channels to move water and improved arrow points to bring down smaller game contributed to the building of culture by allowing people to create food surpluses, remain stationary, and establish rituals, social structures and traditions. Paleoenvironmental Atlas of Beringia with animation showing the coastline after so many years. Video Death Trap  00:01:11(https://ensemble.nmc.edu/Watch/m9JLi2b4) By 3,000 B.C., a primitive type of corn was being grown in the river valleys of New Mexico and Arizona. Then the first signs of irrigation began to appear, and, by 300 B.C., signs of early village life. By the first centuries A.D., the Hohokam were living in settlements near what is now Phoenix, Arizona, where they built ball courts and pyramid-like mounds reminiscent of those found in Mexico, as well as a canal and irrigation system. Video Death Mounds  00:01:16 (https://ensemble.nmc.edu/Watch/Yq5w6KGr) ## Golden Age The "Golden Age,” a two-and-a-half-millennium period between 1200 BC and 1250 AD, saw the continued development of irrigation and agriculture in Southwestern North America, a nomadic hunting culture on the Great Plains and a tendency toward a mix of hunting and farming tribes East of the Mississippi River. The first great civilizations in North America tended to arise near to, and West of, the Mississippi River. If necessity truly is the mother of invention, then it makes sense that some tribes in the Southwest, lacking a steady supply of animals, would seek vegetable sustenance, learn that vegetables do not run away, become proficient at raising them, build more permanent dwellings in prime agricultural regions and realize a regular surplus and population growth as they cooperated to further the community. Video Mesa Verde  00:10:47  (https://ensemble.nmc.edu/Watch/Yw3m7PMn&sa=D&ust=1462823685957000&usg=AFQjCNHCXo1alG4yfdjtzrl9o8Iaqpx7Ug) Once a regular surplus became the norm, some members were freed up to serve as political and religious figures, hence, cultural development, societal structure and all the trappings not usually associated with hunter-gatherer tribes. Video Woodland Culture 00:01:13 (https://ensemble.nmc.edu/Watch/Ms29Xck8) The first Native-American group to build mounds in what is now the United States often are called the Adenans. They began constructing earthen burial sites and fortifications around 600 B.C. Some mounds from that era are in the shape of birds or serpents; they probably served religious purposes not yet fully understood. The Adenans appear to have been absorbed or displaced by various groups collectively known as Hopewellians. One of the most important centers of their culture was found in southern Ohio, where the remains of several thousand of these mounds still can be seen. Believed to be great traders, the Hopewellians used and exchanged tools and materials across a wide region of hundreds of kilometers. ## Cahokia By around 500 A.D., the Hopewellians disappeared, too, gradually giving way to a broad group of tribes generally known as the Mississippians or Temple Mound culture. One city, Cahokia, near Collinsville, Illinois, is thought to have had a population of about 20,000 at its peak in the early 12th century. At the center of the city stood a huge earthen mound, flattened at the top, that was 30 meters high and 37 hectares at the base. Eighty other mounds have been found nearby. Cities such as Cahokia depended on a combination of hunting, foraging, trading, and agriculture for their food and supplies. Influenced by the thriving societies to the south, they evolved into complex hierarchical societies that took slaves and practiced human sacrifice. Video Cahokia. 00:09:44 (https://ensemble.nmc.edu/Watch/j4MSt9n6&sa=D&ust=1462823685970000&usg=AFQjCNFX7UOnW1YREMwk3v12WvkidHHZ1g) Cahokia was sustained by agriculture, small game and cooperation. It is thought that their earthen mounds so impressed visitors that a mound-building culture spread to as much as half of the current-day continental United States. Several mounds have been found as far north as Cadillac in northern Michigan. Cahokia was undone by its own success several hundred years before the Columbian encounter. Continual deforestation around the city drove the small game away and removed most protein from this people’s diet. Their mounds remain as a testament to their industry and way of life. ## Pre-contact North America “Pre-Contact North America” refers to that quarter-millennia period before Columbus landed in the “New World.” This was actually a period of cultural decline for North Americans in both the East and the West. Out west, a nearly two-decade drought, coupled with predatory raids from the Athapascan people drove the agricultural Mogollon and Anasazi out of their significantly developed settlements (see chapter photo at the beginning). These would reconstitute themselves as the Hopi, Zuni and Pueblo peoples. The Athapascan raiders would become the Apache and Navaho and would continue in a parasitic relationship with those whom their ancestors had driven out earlier. Nobody is quite sure what happened in the East. The lack of cultural ties to the Golden Age leaves researchers scratching their collective heads. Clearly, the Eastern tribes lost ground culturally, possibly for the same reasons as the Western tribes but it remains unclear. Video Pre-Columbian Landscape 0:57 (https://ensemble.nmc.edu/Watch/t9AJj86D) ## On the Verge On the verge of contact with Europeans in 1492, Native Americans had settled (though usually quite sparsely) in most regions of North America. The hundreds of tribal groups, languages, dialects and economic systems were more varied than was Europe of that same time. Contrasted to the mono-cultural hunter-gatherers theorized about in the Pleistocene Era, North Americans had indeed adapted to new ways of living and of being. ## Northwest Perhaps the most affluent of the pre-Columbian Native Americans lived in the Pacific Northwest, where the natural abundance of fish and raw materials made food supplies plentiful and permanent villages possible as early as 1,000 B.C.. The opulence of their “potlatch” gatherings remains a standard for extravagance and festivity probably unmatched in early American history. To the extent that farming could be found in this region, so could slavery. The women who worked the fields would encourage the men to raid for the purpose of obtaining slaves who would lessen the field work for these women. These slaves would have one achilles tendon severed so as to prevent escape and to render any dreams of return to a former existence hopeless. ## West Coast Further south, the original inhabitants of California lived a relatively easy existence of fishing, gathering plentiful acorns, and generally not engaging in civic development as this was not necessitated due to an abundance of game and gatherables like nuts and wild berries. ## Southwest In what is now the southwest United States, the Anasazi, ancestors of the modern Hopi Indians, began building stone and adobe pueblos around the year 900. These unique and amazing apartment-like structures were often built along cliff faces; the most famous, the “cliff palace” of Mesa Verde, Colorado, had more than 200 rooms. Another site, the Pueblo Bonito ruins along New Mexico’s Chaco River, once contained more than 800 rooms. The dry conditions forced the inhabitants to cooperate to obtain the most from scarce water resources. Small irrigation canals watered the maize which sustained the Pueblo peoples. The Pueblo peoples themselves were farmed by the Navaho and Apache raiders who would take what they needed from these farmers, being careful not to do so much damage as to prevent next year’s cycle of crops and raids. ## Great Plains On the Plains, a hunting culture was able to sustain itself on the tens of millions of Bison which dwelt there. Bison were so plentiful that these hunters were able to gather as many animals as they needed simply by setting range fires and frightening entire herds over cliffs—usually to obtain only a few animals. The later introduction of the horse by the Spanish would curtail this method because mounted hunters were able to bring down game with greater precision and efficiency. In fact, the introduction of the horse may have extended the era of the great bison herds until the time of Railroad expansion in the United States. ## The East In the Great Lakes region, a combination of fishing, hunting, limited agriculture and wild rice gathering sustained the people. In the more humid east, maize was the staple of most tribes, supplemented by occasional hunting forays for bison in the west. In the Southeast, plentiful game, agriculture and some agricultural slavery were present. Video Wild Rice  0:53 (https://ensemble.nmc.edu/Watch/Ce82Biz6) ## Lifeways Regarding customs, religious observance and culture at large, several factors influenced most tribes. For example, hunter-gatherers tended to be mobile, male-dominated (patriarchal) and individualistic in both their governance and religion. Agricultural tribes tended to stay in one place, emphasize political and religious community, have more elaborate religious rites, more elaborate political structure and more elaborate and permanent architecture. Hunter-gatherers tended to reckon the family line through the father, while agricultural tribes tended to reckon through the mother and were matriarchal. Tribes which blended hunting and agriculture (like many of the eastern tribes the European colonists would first encounter) tended to assign hunting to males and farming to females and have a mix of patrilineal and matrilineal family reckoning. ## Limitations Specific limitations would prevent North Americans from putting up effective resistance to Europeans. First, the lack of resistance to European and African diseases decimated the North Americans. In many cases, disease would have wiped out more than half of a tribe’s population before they ever saw a white person. Coupled with this was the lack of wheel and metallurgical technology as well as efficient use of beasts of burden. Because North Americans did not use the wheel, nor harness oxen and horses, they could not move nearly as much equipment as Europeans. Because they had no metallurgy, they could not forge steel weapons. All of these factors would conspire to put North Americans at a distinct disadvantage to European colonists beginning in 1492. Video: (00:09:00)  Pre-Columbian West  (http://fod.infobase.com/p_ViewPlaylist.aspx?AssignmentID=TDK2AQ) This video is accessible to NMC students only (login required). ## THE ENDURING MYSTERY OF THE ANASAZI Time-worn pueblos and dramatic cliff towns, set amid the stark, rugged mesas and canyons of Colorado and New Mexico, mark the settlements of some of the earliest inhabitants of North America, the Anasazi (a Navajo word meaning “ancient ones”). By 500 A.D. the Anasazi had established some of the first villages in the American Southwest, where they hunted and grew crops of corn, squash, and beans. The Anasazi flourished over the centuries, developing sophisticated dams and irrigation systems; creating a masterful, distinctive pottery tradition; and carving multi-room dwellings into the sheer sides of cliffs that remain among the most striking archaeological sites in the United States today. Yet by the year 1300, they had abandoned their settlements, leaving their pottery, implements, even clothing — as though they intended to return — and seemingly vanished into history. Their homeland remained empty of human beings for more than a century — until the arrival of new tribes, such as the Navajo and the Ute, followed by the Spanish and other European settlers. The story of the Anasazi is tied inextricably to the beautiful but harsh environment in which they chose to live. Early settlements, consisting of simple pithouses scooped out of the ground, evolved into sunken kivas (underground rooms) that served as meeting and religious sites. Later generations developed the masonry techniques for building square, stone pueblos. But the most dramatic change in Anasazi living was the move to the cliff sides below the flat-topped mesas, where the Anasazi carved their amazing, multi-level dwellings. The Anasazi lived in a communal society. They traded with other peoples in the region, but signs of warfare are few and isolated. And although the Anasazi certainly had religious and other leaders, as well as skilled artisans, social or class distinctions were virtually nonexistent. Religious and social motives undoubtedly played a part in the building of the cliff communities and their final abandonment. But the struggle to raise food in an increasingly difficult environment was probably the paramount factor. As populations grew, farmers planted larger areas on the mesas, causing some communities to farm marginal lands, while others left the mesa tops for the cliffs. But the Anasazi couldn’t halt the steady loss of the land’s fertility from constant use, nor withstand the region’s cyclical droughts. Analysis of tree rings, for example, shows that a drought lasting 23 years, from 1276 to 1299, finally forced the last groups of Anasazi to leave permanently. Although the Anasazi dispersed from their ancestral homeland, their legacy remains in the remarkable archaeological record that they left behind, and in the Hopi, Zuni, and other Pueblo peoples who are their descendants.
{}
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> # Chapter 6: Exponents and Exponential Functions Difficulty Level: Advanced Created by: CK-12 ## Introduction Here you'll learn all about exponents in algebra. You will learn the properties of exponents and how to simplify exponential expressions. You will learn how exponents can help you write very large or very small numbers with scientific notation. You will also learn how to solve different types of exponential equations where the variable appears as the exponent or the base. Finally, you will explore different types of exponential functions of the form and as well as applications of exponential functions. Chapter Outline ## Summary You learned that in an expression like , the "2" is the base and the "x" is the exponent. You learned the following laws of exponents that helped you to simplify expressions with exponents: You learned that scientific notation is a way to express large or small numbers in the form where You learned to solve exponential equations with variables in the exponent you should try to rewrite the equations so the bases are the same. Then set the exponents equal to each other and solve. If the equation has a variable in the base you can try to get rid of the exponent or, make the exponents on each side of the equation the same and then set the bases equal to each other and solve. Finally, you learned all about exponential functions. You learned for exponential functions of the form , if then the function is decreasing and represents exponential decay. If then the function is increasing and represents exponential growth. Exponential functions are used in many real-life situations such as with the decay of radioactive isotopes and with interest that compounds. Show Hide Details Description Difficulty Level: Authors: Tags: Subjects: Search Keywords:
{}
If $a,b,c \in \mathbb{Z}$ and $a^2 - b^2 = c$ then $a = \frac{m+n}{2}, b = \frac{m-n}{2}$ [duplicate] Let $a,b,c \in \mathbb{Z}$. Proove that if $a^2 - b^2 = c$ then exists $m,n \in \mathbb{Z}$ which are both even/odd such that $a = \frac{m+n}{2}, b = \frac{m-n}{2}, c = mn$ I think I should use Fermat's theorem, but I'm not sure how to do it marked as duplicate by Martin R, user491874, Community♦Jan 7 '18 at 17:46 • Doesn't $a^2+b^2=c$ then turn into $m^2+n^2=mn$? if both $m$ and $n$ are odd it cannot hold – user310648 Jan 7 '18 at 17:42 Suppose $c$ is odd. Then $c-1$ and $c+1$ are even, implying $\frac{c-1}{2}$ and $\frac{c+1}{2}$ are integers. Then take $m=c,n=1$ getting integral solutions for $a,b$. If $c$ is even then $c=2^nk$ for some odd integer $k$, where $n$ is the highest power of $2$ in $c$. If $n\ge 2$, take $m=2^{n-1}k$ and $n=2$. If $n=1$, then $a^2+b^2=c$ has no solution of the above form. Butobviously it might have other solutions.
{}
# What is the graph of f(x) = 2x^2 - 3x + 7? Sep 27, 2014 To graph a quadratic equation we first need to factorise it into a different form. First we check what the discriminant is equal to Where $f \left(x\right) = a {x}^{2} + b {x}^{2} + c$ $\Delta$(Discriminant)$= {b}^{2} - 4 a c$ In this case $\Delta$=${3}^{2} - 4 \cdot 2 \cdot 7$ $\Delta = - 47$ Because it is less than zero it can't be factored normally Therefore we must use the The Quadratic Formula or Completing the Square Here I have completed the square $f \left(x\right) = 2 {x}^{2} - 3 x + 7$ Remove factor from ${x}^{2}$ term $f \left(x\right) = 2 \cdot \left({x}^{2} - \frac{3}{2} x + \frac{7}{2}\right)$ Take $x$ term, half it and then square it $f \left(x\right) = - \frac{3}{2} \to - \frac{3}{4} \to \frac{9}{16}$ Add and then subtract this number inside the equation $f \left(x\right) = 2 \cdot \left({x}^{2} - \frac{3}{2} x + \frac{9}{16} - \frac{9}{16} + \frac{7}{2}\right)$ Combine the first three terms in a perfect square $f \left(x\right) = 2 \cdot \left({\left(x - \frac{3}{4}\right)}^{2} - \frac{9}{16} + \frac{7}{2}\right)$ Equate left over terms $f \left(x\right) = 2 \cdot \left({\left(x - \frac{3}{4}\right)}^{2} + \frac{47}{16}\right)$ Multiply coefficient back in $f \left(x\right) = 2 {\left(x - \frac{3}{4}\right)}^{2} + \frac{47}{8}$ This gives a turning point of $\left(\frac{3}{4} , \frac{47}{8}\right) = \left(0.75 , 5.875\right)$ and a $y$ intercept of $2 \cdot {\left(\frac{3}{4}\right)}^{2} + \frac{47}{8}$ $= \left(0 , 7\right)$
{}
× INTELLIGENT WORK FORUMS FOR ENGINEERING PROFESSIONALS Are you an Engineering professional? Join Eng-Tips Forums! • Talk With Other Members • Be Notified Of Responses • Keyword Search Favorite Forums • Automated Signatures • Best Of All, It's Free! *Eng-Tips's functionality depends on members receiving e-mail. By joining you are opting in to receive e-mail. #### Posting Guidelines Promoting, selling, recruiting, coursework and thesis posting is forbidden. # Should I Intervene? ## Should I Intervene? (OP) Hello All, I am the only mechanical engineer at my company, but I am not the one that is always gone to for engineering things, whatever. Point is, I have noticed that my boss has decided to task one of my coworkers with testing various rubber hoses and how they perform with fuel permeation. Now being that my boss ordered our hose according to SAE J30R9 specs, you would think that he looked at the testing section, right? As far as I can tell, that may not be the case, as neither my boss nor coworker have referenced the SAE J30R9 Testing Section, nor the next publication you would need, ASTM D471. As the engineer here should I give a damn? I just think it's silly to be spending time not testing stuff the right way, or to not use an industry/world standard that actually means something, especially when we're testing to show why ours beats two other companies. Am I just being a neurotic engineer? ### RE: Should I Intervene? If the coworker is receptive to advice, I would lend advice. If your boss is smart and will listen, you may want to question whether he is aware of the testing protocols. It won't help anyone nor make the process work any better if mistakes are made. Andrew H. www.mototribology.com ### RE: Should I Intervene? (OP) I've briefly discussed the task with coworker the other day when I saw this getting set up, but I'll get some more info on the testing methods tomorrow, he had to leave early today. My boss might listen, but usually I have to deal with ideologue behavior which makes no sense because he literally hired me to do the things he doesn't know or have time, right? Since the test results (insert any engineering aspect to be used for marketing) are planned for marketing to car guys; in my 6 years experience here I think he believes/knows it doesn't need to be that proper for car guys? But if we ever want to be cleared to sell to defense/aerospace/professional companies like he mentions, it seems like doing it the right way the first time make more sense? Like I said, just the engineer trying to prevent all future issues... Our hose is already certified by the NHRA, so I really don't even know why we're wasting time with this AT ALL now?! ### RE: Should I Intervene? You are not being neurotic. Testing requires a meticulous, sometimes even pedantic approach. Lots of accredited testing labs daily perform seemingly simple tests like hardness, without apparently reading not just the applicable ASTM standard but the other documents it links to. If these businesses are consistently getting it wrong, what chance does your boss have? Not being familiar with the details of your testing, I would just say: place your testing in a risk framework. In other words, what is the probability of something going wrong, and what could be the consequence to equipment and life if it did? Maybe your organization would comprehend better if they had things explained to them in those terms. Oh, and document everything you do and say, including verification of ALL of the relevant steps in testing and product validation. For your own protection and for just good basic QC practice. "Everyone is entitled to their own opinions, but they are not entitled to their own facts." ### RE: Should I Intervene? If the goal is a marketing differentiation from competing products then testing "to the standard" doesn't get there. Everyone "meets the standard". You want to claim "exceeds the standard". Or possibly even better, be able to claim that your product does something amazing that the standard doesn't even consider. Our milk is caffeine and gluten free! ### RE: Should I Intervene? (OP) That's great advice ironice_metalluurgist (awesome username btw) and I think maybe that's why this sandbox style test is perhaps being done? I am just more concerned about the blow-back from the other two competitor companies if they are going to try and fact-check us, for example. ### RE: Should I Intervene? From the standpoint of a past product designer who routinely tested to ASTM standards for marketing purposes against competitors who liked to make up their own versions of standard tests, it was extremely easy for me to dismantle our competitors' claims when talking to customers. What always worked for me and built up credibility for myself and the brand I was working for was to follow the standard testing exactly and make sure our product exceeded the minimum and what our competitors claimed. Creating tests outside of the standards for marketing purposes makes most companies that do it look like a joke in my opinion. If there is enough documentation and justification behind unique testing, it might pass muster and be legitimate, but tests like that were far and few between in my experience. Andrew H. www.mototribology.com ### RE: Should I Intervene? (OP) I see your point there as well MintJulep (another cool SN on this thread) and perhaps that's the intent of this test in certain regards; I'll know a bit more later for sure. Even when following the standards, the variances in results, through any means of analysis will inherently separate out the non-standard results/aspects of the competing hoses as well. Whether that mandates retesting or a discussion of valid, yet interesting results in the end is a different story of course. For example, after performing a test with valid results, we can say that Competitor A & B's hoses smell more like gasoline (subjective BS however) or that one of the hoses now has an observable/physical change in material composition after experiencing gasoline. It's just weird, because what we're testing for, permeation, should be tested using ASTM D471.... ### RE: Should I Intervene? (OP) SuperSalad, I agree with you, and whether people want to call you an ASTM shill or not is up to them ;). To me, it's almost utterly disrespectful to the hard work that ASTM's team is putting into this. It's not just a standard, it was likely a year (or more) of school/research lab-like work, it's making sure that it's fact verified on 100% of it's aspects and cited texts, publication, standard, etc. It's making sure that the previous test didn't have issues or inaccuracies, so that when it comes up for renewal, it's sure to still be correct and relevant, otherwise they will actually be corrected and revised!! That's awesome to me!! It's HYSTERICAL that people don't turn to stuff like this first, imo. But hey, I work smarter, not harder most times. That's all I'm worried about to, some bung-hole engineer wanting to dismantle our claims haha! ### RE: Should I Intervene? Remember that as well written as ASTM standards are (and IMO they are the gold standard) their primary purpose is not to put too many companies out of business. Here in the metal fabrication business I routinely add requirements or encounter them in technical specifications. When you make a product that depends on its public reputation you almost certainly want to go well above the 'minimum' requirements. "Everyone is entitled to their own opinions, but they are not entitled to their own facts." ### RE: Should I Intervene? The only plausible reason to NOT use standard tests, weak though it might be, is if you know that you will fail miserably, and you need some bright, shiny, results that you can desperately cling to. Barring that, any discussion with your manager or coworker should lead with, "BTW, since these results are going to be publicized, shouldn't we be testing to the industry-accepted standards?" The whole point of standards, beyond good practice, is to level the playing field so that results can be compared and evaluated. I know that if I were to suggest using some unknown or even semi-known process for measuring MTF at work, they'd laugh at me and say, "Why aren't we going with the ISO standard?" TTFN (ta ta for now) I can do absolutely anything. I'm an expert! https://www.youtube.com/watch?v=BKorP55Aqvg FAQ731-376: Eng-Tips.com Forum Policies forum1529: Translation Assistance for Engineers Entire Forum list http://www.eng-tips.com/forumlist.cfm ### RE: Should I Intervene? Think Firestone 500 tires. "Everyone is entitled to their own opinions, but they are not entitled to their own facts." ### RE: Should I Intervene? IRstuff, It would take the average lawyer less than 5 minutes to turn an 'optional' testing standard into a 'legally required' testing standard. The same with non-mandatory appendices in the ASME B&PV Code - I treat them all as mandatory. "Everyone is entitled to their own opinions, but they are not entitled to their own facts." (OP) Close Box # Join Eng-Tips® Today! Join your peers on the Internet's largest technical engineering professional community. It's easy to join and it's free. Here's Why Members Love Eng-Tips Forums: • Talk To Other Members • Notification Of Responses To Questions • Favorite Forums One Click Access • Keyword Search Of All Posts, And More... Register now while it's still free!
{}
# 1. As a manager, you must choose between two inventory management software packages. One is a... 1. As a manager, you must choose between two inventory management software packages. One is a stand-alone package that only manages inventory. It allows users to define their own reports without much training. The other requires a professional programmer for new reports, but it is part of an ERP system that can handle much more than inventory management. Describe how you would choose between the two packages. 2. Mando’s inventory at the end of 2011 was valued at approximately$300million.Considerasmallbicyclestore whose inventory is valued at$30,000, about 1/10,000 of Mando’s figure. In what ways are its inventory reporting needs similar? In what ways are they different?
{}
# On a characterization of primitive polynomials over a finite field Let $K$ be a finite field. Let us define a primitive polynomial as an $f \in K[X]$ s.t. the multiplicative order of $X$ in $K[X]/(f)$ is equal to $|K|^{\deg f} - 1$. I want to show that $f \in K[X]$ is primitive if and only if $f$ is irreducible and $X$ generates the multiplicative group $(K[X]/(f))^\times$. I would like to ask how to show this. I already showed that if $f$ is primitive and irreducible the latter half of the condition holds, but I cannot figure out the rest. I would also like to know if it is customary to talk about the multiplicative order of an element of a ring whose multiplicative part is not necessarily a group. - I think you mean "multiplicative order of $X$." Also, the multiplicative group has $|K|^{\deg f}-1$ elements at most, so $X$ can never have the order you give in your problem. –  Thomas Andrews Mar 10 '13 at 4:10 ## 1 Answer I assume that the definition of primitive includes that $X$ is relatively prime to $f$ (since if not, $X$ has no well-defined multiplicative order in $K[X]/(f)$.) If $f$ has a nontrivial divisor $g$, there are at least two elements of $K[X]/(f)$, $0$ and $g$, which do not have multiplicative inverses, so $K[X]/(f)$ has at most $|K|^{\deg f}-2$ invertible elements. However, if $f$ is primitive, then, by definition, $K[X]/(f)$ contains $|K|^{\deg f}-1$ powers of $X$, all of which are invertible. Therefore, if $f$ is primitive, then it is also irreducible. This reduces the problem to showing that if $f$ is irreducible, then $f$ generates the multiplicative group of $K[X]/(f)$ if and only if $f$ has order $|K|^{\deg f}-1$. But this is obvious because the group has order $|K|^{\deg f}-1$. -
{}