Multiline equation write form in text cell? Is there a way to write an equation within a text cell across a few lines? For example, this: $f(x) = y^x$ renders normally, but the same equation with line breaks inside the $...$ pair renders as plain text. I want it to be rendered in the same way as the first one. P.S. I am new to LaTeX, so I am trying to find a way to make complex equations in LaTeX more readable (maybe there is a better way to do it than splitting them over a few lines?).
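For reference, the usual way to get a multi-line equation (assuming the text cell is rendered with MathJax, as in Sage and Jupyter notebooks) is an alignment environment rather than a raw line break inside a single $...$ pair. A minimal sketch, where the extra steps are purely illustrative:

```latex
$$
\begin{aligned}
f(x) &= y^x \\
     &= \left(e^{\ln y}\right)^x \\
     &= e^{x \ln y}
\end{aligned}
$$
```

Each line ends with \\ and the & markers keep the = signs aligned; whether a bare \begin{align} block (without the $$ delimiters) also renders depends on the renderer's configuration.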
# Calculating probability of a normal distribution, not getting correct answer I'm doing a homework assignment and having some trouble matching the correct answers from my professor. As a reference, I'm calculating these answers using R. The question is as follows: Assuming $E(X_i) = \mu = 7$, $Var(X_i) = \sigma^2 = 0.12$, and $\bar{X_{50}} \sim N(7,0.12)$ What is $P(\mu - 2 \times \frac{\sigma}{\sqrt{50}} \leq \bar{X_{50}} \leq \mu + 2 \times \frac{\sigma}{\sqrt{50}})$? The answer my professor gave is $0.9545$. He also stated that I don't need to know the values of $\mu$ and $\sigma$. I don't know how he got this though? Doing the easy thing and plugging in 7 for $\mu$ and 0.12 for $\sigma^2$ I ended up getting: $P(6.90202041 \leq \bar{X_{50}} \leq 7.09797959) = P(\bar{X_{50}} \leq 7.09797959) - P(\bar{X_{50}} \leq 6.90202041)$ Then in R I did pnorm(7.09797959,7,0.12) - pnorm(6.90202041,7,0.12) and got the result $0.5857838$ My question is what did I do wrong, and what should I be doing? This is the way he told us to do inequalities for probabilities like the one above. Also how could this be solved without having to use $\mu$ and $\sigma$? Consider the more general probability $$\Pr\left[ \mu - z \frac{\sigma}{\sqrt{n}} < \bar X < \mu + z \frac{\sigma}{\sqrt{n}} \right],$$ for some $z > 0$, where $$\bar X = \frac{1}{n} \sum_{i=1}^n X_i$$ is the sample mean of $n$ independent and identically distributed observations from a normal distribution with mean $\mu$ and standard deviation $\sigma$. 
Then the sampling distribution of the sample mean is also normally distributed, with the same mean $$\mu_{\bar X} = \mu$$ but with standard deviation $$\sigma_{\bar X} = \frac{\sigma}{\sqrt{n}}.$$ Therefore, if we standardize $\bar X$, we find that the above probability is equivalent to $$\Pr\left[ -z < \frac{\bar X - \mu}{\sigma/\sqrt{n}} < z \right],$$ and the random variable $$Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}}$$ is a standard normal random variable with mean $\mu_Z = 0$ and standard deviation $\sigma_Z = 1$. So the above probability is simply $$\Pr[|Z| < z] = 1 - 2 \Phi(-z) = 2\Phi(z) - 1.$$ Now you can see why your professor said that the values of $\mu$ and $\sigma$ do not matter: what matters is the constant $z = 2$. Looking up $\Phi(2) \approx 0.97725$ in a standard normal table, or using the R command pnorm(2,0,1), we then obtain the desired probability $$2(0.97725) - 1 = 0.9545,$$ as claimed. Now, let's see why your calculation didn't work. First, we have to be clear about the difference between the normal distribution from which individual observations are drawn, and the normal distribution that represents the mean of the set of observations, which we called "the sampling distribution of the sample mean." Both distributions are normal, and both have the same mean, but the sampling distribution has a standard deviation that is a decreasing function of the sample size: $\sigma_{\bar X} = \sigma/\sqrt{n}$. Consequently, if individual observations have mean $7$ and variance $0.12$ (so $\sigma = \sqrt{0.12} \approx 0.34641$), then the sample mean of $n = 50$ such observations has standard deviation $\sigma_{\bar X} = \sqrt{0.12}/\sqrt{50} \approx 0.04899$. Your endpoints $7 \pm 2\sqrt{0.12}/\sqrt{50}$ were in fact computed correctly; the problem is that your pnorm() command passed 0.12 — the variance of a single observation — as the standard deviation, rather than $\sigma_{\bar X}$. (Note that pnorm() takes a standard deviation, not a variance, as its third argument.) The correct R command is pnorm(7+2*sqrt(0.12)/sqrt(50),7,sqrt(0.12)/sqrt(50))-pnorm(7-2*sqrt(0.12)/sqrt(50),7,sqrt(0.12)/sqrt(50))
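The algebra above is easy to verify numerically. Here is the same computation in Python with scipy (rather than R), done both with the explicit sampling-distribution parameters and via the standardized form $2\Phi(z) - 1$:

```python
import math
from scipy.stats import norm

mu, var, n, z = 7.0, 0.12, 50, 2.0
sd_xbar = math.sqrt(var) / math.sqrt(n)  # sigma / sqrt(n), the SD of the sample mean

# Direct computation, using the correct standard deviation of the sample mean
lo, hi = mu - z * sd_xbar, mu + z * sd_xbar
p_direct = norm.cdf(hi, loc=mu, scale=sd_xbar) - norm.cdf(lo, loc=mu, scale=sd_xbar)

# Standardized form: depends only on z, not on mu or sigma
p_std = 2 * norm.cdf(z) - 1

print(round(p_direct, 4), round(p_std, 4))  # 0.9545 0.9545
```

Either route gives 0.9545, confirming that the values of $\mu$ and $\sigma$ drop out.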
# HAMLAT: A HAML-based authoring tool for haptic application development ### Abstract Haptic applications have received enormous attention in the last decade. Nonetheless, the diversity of haptic interfaces, virtual environment modeling, and rendering algorithms has made the development of hapto-visual applications a tedious and time-consuming task that requires significant programming skills. To tackle this problem, we present the HAML-based Authoring Tool (HAMLAT), an authoring tool built on the HAML description language that allows users to render the graphics and haptics of a virtual environment with no programming skills. The modeler is able to export the developed application in a standard HAML description format. The proposed system comprises three components: the authoring tool (HAMLAT), the HAML engine, and the HAML player. The tool is implemented by extending the Blender 3D modeling platform to support haptic interaction. The demonstrated tool shows the simplicity and efficiency of prototyping haptic applications for non-programmer developers and artists.

Publication: Haptics: Perception, Devices and Scenarios

BibTeX:

```
@article{eid2008hamlat,
  title={HAMLAT: A HAML-based authoring tool for haptic application development},
  author={Eid, Mohamad and Andrews, Sheldon and Alamri, Atif and El Saddik, Abdulmotaleb},
  journal={Haptics: Perception, Devices and Scenarios},
  pages={857--866},
  year={2008},
  publisher={Springer}
}
```
Code should execute sequentially if run in a Jupyter notebook # Setting up Your Julia Environment¶ ## Overview¶ In this lecture we will cover how to get up and running with Julia Topics: 1. Installation 2. Interactive Julia sessions 3. Running sample programs 4. Installation of libraries, including the Julia code that underpins these lectures ## First Steps¶ ### Installation¶ Note: In these lectures we assume you have version 0.6 or later Unless you have good reason to do otherwise, choose • The current release rather than nightly build • The platform specific binary rather than source Assuming there were no problems, you should now be able to start Julia either by • Navigating to Julia through your menus or desktop icons (Windows, OSX), or • Opening a terminal and typing julia (Linux) Either way you should now be looking at something like this (modulo your operating system — this is a Linux machine) ### The REPL¶ The program that’s running here is called the Julia REPL (Read Eval Print Loop) or Julia interpreter Let’s try some basic commands The Julia interpreter has the kind of nice features you expect from a modern REPL For example, • Pushing the up arrow key retrieves the previously typed command • If you type ? 
the prompt will change to help?> and give you access to online documentation You can also type ; to get a shell prompt, at which you can enter shell commands (Here ls is a UNIX style command that lists directory contents — your shell commands depend on your operating system) Below we’ll often show interactions with the interpreter as follows x = 10 10 2 * x 20 ## Installing Packages¶ In these lectures you’ll often see statements such as using Plots or using QuantEcon These commands pull in code from some of Julia’s many external Julia code libraries For the code to run, you need to install the corresponding package first Fortunately this is easy using Julia’s package management system For example, let’s install DataFrames, which provides useful functions and data types for manipulating data sets Pkg.add("DataFrames") Assuming you have a working Internet connection this should install the DataFrames package Here’s how it looks on our machine (which already has this package installed) If you now type Pkg.status() you’ll see DataFrames and its version number To pull the functionality from DataFrames into the current session we type using DataFrames Now its functions are accessible df = DataFrame(x1=[1, 2], x2=["foo", "bar"]) 2x2 DataFrame | Row | x1 | x2 | |-----|----|-------| | 1 | 1 | "foo" | | 2 | 2 | "bar" | ### Keeping your Packages up to Date¶ Running Pkg.update() will update your installed packages and also update local information on the set of available packages ### QuantEcon¶ QuantEcon is an organization that facilitates development of open source code for economic modeling As well as these lectures, it supports QuantEcon.jl, a library for quantitative economic modeling in Julia The installation method is standard Pkg.add("QuantEcon") Here’s an example, which creates a discrete approximation to an AR(1) process using QuantEcon: tauchen tauchen(4, 0.9, 1.0) Discrete Markov Chain stochastic matrix of type Array{Float64,2}: [0.945853 0.0541468 2.92863e-10 0.0; 
0.00580845 0.974718 0.0194737 1.43534e-11; 1.43534e-11 0.0194737 0.974718 0.00580845; 2.08117e-27 2.92863e-10 0.0541468 0.945853] ## Jupyter¶ To work with Julia in a scientific context we need at a minimum 1. An environment for editing and running Julia code 2. The ability to generate figures and graphics One option that provides these features is Jupyter As a bonus, Jupyter also provides • Nicely formatted output in the browser, including tables, figures, animation, video, etc. • The ability to mix in formatted text and mathematical expressions between cells • Functions to generate PDF slides, static HTML, etc. Whether you end up using Jupyter as your primary work environment or not, you’ll find learning about it an excellent investment ### Installing Jupyter¶ There are two steps here: 1. Installing Jupyter itself 2. Installing IJulia, which serves as an interface between Jupyter notebooks and Julia In fact you can get both by installing IJulia However, if you have the bandwidth, we recommend that you 1. Do the two steps separately 2. 
In the first step, when installing Jupyter, do this by installing the larger package Anaconda Python The advantage of this approach is that Anaconda gives you not just Jupyter but the whole scientific Python ecosystem This includes things like plotting tools we’ll make use of later #### Installing Anaconda¶ If you are asked during the installation process whether you’d like to make Anaconda your default Python installation, say yes — you can always remove it later Otherwise you can accept all of the defaults Note that the packages in Anaconda update regularly — you can keep up to date by typing conda update anaconda in a terminal #### Installing IJulia¶ Now open up a Julia terminal and type Pkg.add("IJulia") If you have problems, consult the installation instructions #### Other Requirements¶ Since IJulia runs in the browser it might be a good time to update your browser One good option is to install a free modern browser such as Chrome or Firefox In our experience Chrome plays well with IJulia ### Getting Started¶ Now either 1. search for and start the Jupyter notebook application on your machine or 2. 
open up a terminal (or cmd in Windows) and type jupyter notebook You should see something (not exactly) like this The page you are looking at is called the “dashboard” The address localhost:8888/tree you see in the image indicates that the browser is communicating with a Julia session via port 8888 of the local machine If you click on “New” you should have the option to start a Julia notebook Here’s what your Julia notebook should look like The notebook displays an active cell, into which you can type Julia commands ### Notebook Basics¶ Notice that in the previous figure the cell is surrounded by a green border This means that the cell is in edit mode As a result, you can type in Julia code and it will appear in the cell When you’re ready to execute these commands, hit Shift-Enter instead of the usual Enter #### Switching modes¶ • To switch to command mode from edit mode, hit the Esc key • To switch to edit mode from command mode, hit Enter or click in a cell The modal behavior of the Jupyter notebook is a little tricky at first but very efficient when you get used to it #### Working with Files¶ To run an existing Julia file using the notebook you can copy and paste the contents into a cell in the notebook If it’s a long file, however, you have the alternative of 1. Saving the file in your present working directory 2. 
Executing include("filename") in a cell The present working directory can be found by executing the command pwd() #### Plots¶ Let's generate some plots There are several options we'll discuss in detail later For now let's start with Plots.jl Pkg.add("Plots") Now try copying the following into a notebook cell and hit Shift-Enter using Plots plot(sin, -2pi, pi, label="sine function") You'll see something like this (although the style of plot depends on your installation — more on this later) ### Working with the Notebook¶ Let's go over some more Jupyter notebook features — enough so that we can press ahead with programming #### Tab Completion¶ A simple but useful feature of IJulia is tab completion For example if you type rep and hit the tab key you'll get a list of all commands that start with rep IJulia offers up the possible completions This helps remind you of what's available and saves a bit of typing To get help on a Julia function such as repmat, enter ?repmat Documentation should now appear in the browser #### Other Content¶ In addition to executing code, the Jupyter notebook allows you to embed text, equations, figures and even videos in the page For example, here we enter a mixture of plain text and LaTeX instead of code Next we hit Esc to enter command mode and then type m to indicate that we are writing Markdown, a mark-up language similar to (but simpler than) LaTeX (You can also use your mouse to select Markdown from the Code drop-down box just below the list of menu items) Now we hit Shift + Enter to produce this #### Inserting unicode (e.g., Greek letters)¶ Julia supports the use of unicode characters such as α and β in your code Unicode characters can be typed quickly in Jupyter using the tab key Try creating a new code cell and typing \alpha, then hitting the tab key on your keyboard #### Shell Commands¶ You can execute shell commands (system commands) in IJulia by prepending a semicolon For example, ;ls will execute the UNIX style shell command ls, which
— at least for UNIX style operating systems — lists the contents of the current working directory These shell commands are handled by your default system shell and hence are platform specific ### Sharing Notebooks¶ Notebook files are just text files structured in JSON and typically end with .ipynb A notebook can easily be saved and shared between users — you just need to pass around the ipynb file To open an existing ipynb file, import it from the dashboard (the first browser page that opens when you start Jupyter notebook) and run the cells or edit as discussed above #### nbviewer¶ The Jupyter organization has a site for sharing notebooks called nbviewer The notebooks you see there are static HTML representations of notebooks However, each notebook can be downloaded as an ipynb file by clicking on the download icon at the top right of its page Once downloaded you can open it as a notebook, as we discussed just above ## Alternatives to Jupyter¶ In this lecture series we’ll assume that you’re using Jupyter Doing so allows us to make sure that everything works in at least one sensible environment But as you work more with Julia you will want to explore other environments as well Here are some notes on working with the REPL, text editors and other alternatives ### Editing Julia Scripts¶ You can run Julia scripts from the REPL using the include("filename") syntax The file needs to be in the present working directory, which you can determine by typing pwd() You also need to know how to edit them — let’s discuss how to do this without Jupyter #### IDEs¶ IDEs (Integrated Development Environments) combine an interpreter and text editing facilities in the one application For Julia one nice option is Juno #### Text Editors¶ The beauty of text editors is that if you master one of them, you can use it for every coding task you come across, regardless of the language At a minimum, a text editor for coding should provide • Syntax highlighting for the languages you want to work 
with • Automatic indentation • Efficient text manipulation (search and replace, copy and paste, etc.) There are many text editors that speak Julia, and a lot of them are free Suggestions: • Atom is a popular open source next generation text editor • Sublime Text is a modern, popular and highly regarded text editor with a relatively moderate learning curve (not free but trial period is unlimited) • Emacs is a high quality free editor with a sharper learning curve Finally, if you want an outstanding free text editor and don’t mind a seemingly vertical learning curve plus long days of pain and suffering while all your neural pathways are rewired, try Vim ## Exercises¶ ### Exercise 1¶ If Jupyter is still running, quit by using Ctrl-C at the terminal where you started it Now launch again, but this time using jupyter notebook --no-browser This should start the kernel without launching the browser Note also the startup message: It should give you a URL such as http://localhost:8888 where the notebook is running Now 1. Start your browser — or open a new tab if it’s already running 2. Enter the URL from above (e.g. http://localhost:8888) in the address bar at the top You should now be able to run a standard Jupyter notebook session This is an alternative way to start the notebook that can also be handy ### Exercise 2¶ This exercise will familiarize you with git and GitHub Git is a version control system — a piece of software used to manage digital projects such as code libraries In many cases the associated collections of files — called repositories — are stored on GitHub GitHub is a wonderland of collaborative coding projects Git is an extremely powerful tool for distributed collaboration — for example, we use it to share and synchronize all the source files for these lectures There are two main flavors of Git 1. The plain vanilla command line Git version 2. The various point-and-click GUI versions As an exercise, try 1. Installing Git 2. 
Getting a copy of QuantEcon.jl using Git For example, if you’ve installed the command line version, open up a terminal and enter git clone https://github.com/QuantEcon/QuantEcon.jl (This is just git clone in front of the URL for the repository) Even better,
In [1]: %matplotlib inline import sdt import matplotlib.pyplot as plt import numpy as np from scipy.stats import norm Why a signal detection theory notebook?¶ Numerous times, I have looked for signal detection theory (SDT) tutorials and illustrations online, but I have never been satisfied by what I've found. The Wikipedia article is fine, but I think you need a decent amount of background to make much sense of it, and it would be better if it had some illustrations. I've found a few other things online, but none have jumped out at me as particularly good. So, here is the first of a (planned) series of Python notebooks about SDT. My aim is to use both code and math to help the reader build up his or her intuitions about how SDT works and why it's useful. It would be a nice bonus if the reader ends up actually able to use SDT productively in research, too. Please feel free to get in touch with any comments or criticisms. Some caveats¶ I will note at the outset that this will be my (somewhat atypical) take on SDT. I learned about SDT as a model of perception and response selection while I was learning about Bayesian statistical modeling, and the latter strongly informs how I think about the former. I also note here that there is a lot of overlap between signal detection theory and (some parts of) statistical decision theory (e.g., Neyman-Pearson-style null hypothesis significance testing, or NHST). There is also a lot of overlap between SDT and various aspects of machine learning classifiers and statistical models. I will probably touch on these on occasion, but they won't be my focus. Some books¶ There are numerous books on SDT, as well. I will borrow various bits and pieces from some of them. In particular, I will refer repeatedly to (and adapt liberally from) both Wickens (2002) Elementary Signal Detection Theory and Green & Swets (1966) Signal Detection Theory and Psychophysics. 
At least at the beginning, I will follow, very roughly, the topics that Wickens covers in his book. Some other books that may come up at some point: McNicol (2005) A Primer of Signal Detection Theory, Egan (1975) Signal Detection Theory and ROC Analysis, and Macmillan & Creelman (2005) Detection Theory: A User's Guide. I will also probably refer to journal articles here and there. The main point and an example to illustrate it¶ Here's the very short, oversimplified reason SDT is useful: it allows you to tease apart response bias and sensitivity (to a noisy signal). It can be fruitfully applied to the study of perception and memory, among a wide range of topics, some of which I'll delve into in the future. For this notebook, though, I will illustrate the basic concepts of SDT using an example similar to the examples in the first chapter of Wickens' book. The main points of this notebook (and that chapter) are to introduce and illustrate a number of concepts that are central to SDT. An auditory perception experiment¶ In Experiment 1 of this paper (pdf here), we report data and model fits for a multidimensional signal detection experiment probing the perception of frequency and duration in broadband noise stimuli. As part of this experiment (though unreported in the paper), we also had listeners identify just frequency or just duration (with the other dimension held constant). We also had them identify pairs of stimuli that differed with respect to both frequency and duration (more on this, and on multidimensional signal detection theory, in later notebooks). The table below contains the data from one listener's block of a duration identification task (csv). In this block, there were 250 trials. In all 250 trials, the stimuli consisted of broadband noise, ranging from 510 Hz to 1510 Hz. In 200 of the trials, the stimuli were 300 ms long, while in the other 50 trials, the stimuli were 250 ms long. 
The listeners in this experiment were told the relative presentation rates before each block and were encouraged to guess accordingly when uncertain. The rows in the table correspond to the stimuli, and the columns to responses:

|       | "short" | "long" |
|-------|---------|--------|
| short | 36      | 14     |
| long  | 12      | 188    |

This listener was able to distinguish the short and long stimuli reasonably well, but clearly not perfectly. Perhaps the most obvious way to quantify this listener's performance is to calculate the proportion of correct responses: $\displaystyle{\frac{36+188}{250}=0.896}$. This is not unreasonable, but it's missing some interesting and important aspects of the data and of the system we're trying to learn about (a listener processing the duration of sounds). Before discussing what these aspects are, here is another table for the same listener doing the same basic task, only in this block, there were 200 trials of short stimuli and 50 trials of long stimuli (csv):

|       | "short" | "long" |
|-------|---------|--------|
| short | 189     | 11     |
| long  | 16      | 34     |

Overall accuracy was very similar in this block: $\displaystyle{\frac{34+189}{250}=0.892}$. Okay, so, what does overall accuracy miss? In both tables, the listener produced approximately equal numbers of incorrect responses, and stimuli with exactly the same acoustic properties were presented in each block. Because one stimulus type was four times more common than the other, though, the raw number of incorrect responses makes an interesting asymmetry less obvious than it could be. Here are both tables with counts replaced by row-based proportions (i.e., counts divided by row sums, or estimates of response probabilities conditional on stimulus type):

Block one:

|       | "short" | "long" |
|-------|---------|--------|
| short | 0.72    | 0.28   |
| long  | 0.06    | 0.94   |

Block two:

|       | "short" | "long" |
|-------|---------|--------|
| short | 0.945   | 0.055  |
| long  | 0.32    | 0.68   |

In the first block, the listener was far less likely to call a long stimulus "short" than she was to call a short stimulus "long." 
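The row-based proportion tables can be recomputed directly from the counts; a minimal sketch in NumPy, where the array layout mirrors the tables and the function name is mine:

```python
import numpy as np

# Rows = stimulus (short, long); columns = response ("short", "long")
block1 = np.array([[36, 14], [12, 188]])
block2 = np.array([[189, 11], [16, 34]])

def row_proportions(counts):
    """Divide each row by its row sum, estimating P(response | stimulus)."""
    return counts / counts.sum(axis=1, keepdims=True)

print(row_proportions(block1))  # rows: 0.72, 0.28 and 0.06, 0.94
print(row_proportions(block2))  # rows: 0.945, 0.055 and 0.32, 0.68
```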
And in the second block, she was far less likely to call a short stimulus "long" than she was to call a long stimulus "short." More generally, in both blocks, she was far less likely to make a mistake on trials in which the more frequent stimulus type was presented than she was to make a mistake on trials in which the less frequent stimulus type was presented. In both blocks, though, the stimuli were exactly the same, so there's no reason to think that the listener would be any less (or any more) sensitive to the durational differences between the 250 ms and 300 ms stimuli. SDT gives us a powerful framework for accounting for exactly these properties of a detection system. Some terminology¶ The four possible combinations of stimulus and response have special names. In the most basic kind of SDT task, the two stimulus types consist of noise and signal + noise (e.g., white noise and white noise with a pure tone added). In the tables above, I have been implicitly treating the short stimuli as the noise stimuli and the long stimuli as the signal + noise stimuli. Using the more basic noise and signal labels, the four types of response are as follows:

|        | "No"              | "Yes"       |
|--------|-------------------|-------------|
| noise  | correct rejection | false alarm |
| signal | miss              | hit         |

Because the experimenter typically controls how many of each type of trial occurs, and because responses are expected to vary systematically with stimulus type, we are typically concerned with either the conditional probability estimates or the number of responses in each row along with the row total, as shown in the tables above. Hence, we can focus on either hits and false alarms or on correct rejections and misses, since the hit rate h is the complement of the miss rate m and the false alarm rate f is the complement of the correct rejection rate c: \begin{align*} h = 1-m\\ f = 1-c \end{align*} It is traditional to focus on h and f. As a brief aside, I note that this also serves to illustrate the basic structure of NHST. 
The "noise only" distribution corresponds to a null hypothesis, the "signal + noise" distribution corresponds to an alternative hypothesis, and the signal strength corresponds to the effect size. The decision rule in NHST is determined by fixing $\alpha$, the false alarm rate, at a desired value. This will probably come up again in later notebooks. Note, too, that there are a number of other quantities that can be computed from a 2 $\times$ 2 confusion matrix like this (e.g., positive predictive value, false discovery rate, etc..., see this ugly but accurate table, for example, though note that the rows and columns are switched there, relative to how I have it here). In looking at this table (and in reading papers in some fields), it's important to keep in mind that hit rate in SDT is often called "sensitivity" in other types of analyses and that correct rejection rate in SDT is often called "specificity." The former can be particularly confusing when you're dealing with SDT, since sensitivity means something very different in SDT (though the two are related). A basic SDT model¶ Per Wickens (Ch. 1), SDT relies on three assumptions: 1. The information extracted by an observer can be represented by a single number (i.e., it is unidimensional). 2. The information extracted by an observer is partially random. 3. Responses are chosen with respect to a decision criterion that exhaustively partitions this information dimension. The most common SDT model for the kind of data shown above encodes these three assumptions by modeling distributions of perceptual effects for noise and signal trials and partitioning the perceptual space with a response criterion $\lambda$. 
More mathematically, the perceptual effects produced by "noise only" trials are distributed as normal random variables with mean $\mu_N$ and standard deviation $\sigma_N$, and the perceptual effects produced by "signal + noise" trials are distributed as normal random variables with mean $\mu_S$ and standard deviation $\sigma_S$: \begin{align*} X_N &\sim \mathrm{Normal}\left( \mu_N, \sigma_N \right)\\ X_S &\sim \mathrm{Normal}\left( \mu_S, \sigma_S \right) \end{align*} Here is an illustration of a simple Gaussian SDT model for the kind of data shown in the first table above (using a plotting function in the sdt module I created to save some space in these notebooks): In [2]: mu_N, mu_S, lam = 0, 2.14, 0.58 sdt.plot_simple_model(mu=[mu_N, mu_S], lam=lam) This model predicts $h$ and $f$ as the areas under the signal and noise distributions above the response criterion (i.e., the integrals from $\lambda$ to infinity under each probability density function): \begin{align*} h = \Pr(\text{"Yes"} | \text{S}) &= \int_\lambda^\infty f_S(x) dx = 1-\Phi\left( \frac{\lambda-\mu_S}{\sigma_S} \right)\\ f = \Pr(\text{"Yes"} | \text{N}) &= \int_\lambda^\infty f_N(x) dx = 1-\Phi\left( \frac{\lambda-\mu_N}{\sigma_N} \right) \end{align*} These integrals are indicated by the shaded regions in the figure above, red for $h$, blue for $f$. Note that $\Phi(x)$ is the standard normal CDF at $x$. We can get estimates of this particular model's $h$ and $f$ predictions by using norm.sf() (the survivor function, the complement of the CDF): In [3]: h = norm.sf(lam, loc=mu_S, scale=1) np.round(h,2) Out[3]: 0.94 In [4]: f = norm.sf(lam, loc=mu_N, scale=1) np.round(f,2) Out[4]: 0.28 In its most general form, this model has five parameters: the response criterion $\lambda$ and the mean and standard deviation of each distribution: $\mu_N$, $\sigma_N$, $\mu_S$, $\sigma_S$. However, not all five parameters are identifiable. 
One important reason that this model is not identifiable is that it is translation, scale, and reflection invariant. This means that if we shift, scale, or horizontally flip the whole model - both distributions and the criterion (along with the associated response labels) - it predicts the same response probabilities. The location, scale, and orientation of the model with respect to the x-axis are all arbitrary. There are a number of ways that we can fix the location and scale of the model. It is traditional to do so by setting $\mu_N = 0$ and $\sigma_N = 1$. It is also standard practice to deal with reflection invariance by arbitrarily assigning the "signal" response to the region to the right of the criterion. Even after fixing the location, scale, and orientation, though, we have three free parameters: $\mu_S$, $\sigma_S$, and $\lambda$. For a single table of the type given above, this model is still not identifiable. This is because these three parameters can trade off one another to produce exactly the same prediction for $h$. In order to deal with this, it is traditional to set $\sigma_S = 1$ and use the data to estimate $\mu_S$ and $\lambda$. A brief mathematical aside about the non-identifiability of $\mu_S$, $\sigma_S$, and $\lambda$¶ First, note that by setting $\mu_N = 0$ and $\sigma_N = 1$, the response criterion $\lambda$ determines the predicted response probabilities for noise-only trials: $\Pr(\text{"No"}|\text{N}) = \Phi\left( \lambda \right)$ and $\Pr(\text{"Yes"}|\text{N}) = 1-\Phi\left( \lambda \right)$. The equations for hits and misses are more complicated, because the signal distribution has mean $\mu_S$ and SD $\sigma_S$: $\Pr(\text{"No"}|\text{S}) = \Phi\left( \displaystyle{\frac{\lambda-\mu_S}{\sigma_S}} \right)$, and $\Pr(\text{"Yes"}|\text{S}) = 1-\Phi\left( \displaystyle{\frac{\lambda-\mu_S}{\sigma_S}} \right)$. 
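This non-identifiability is easy to check numerically: rescaling the signal distribution to unit variance, giving it mean $\lambda - (\lambda-\mu_S)/\sigma_S$, leaves the predicted hit rate unchanged. A sketch using scipy, with arbitrary made-up parameter values:

```python
from scipy.stats import norm

lam, mu_S, sigma_S = 0.58, 2.14, 1.7  # arbitrary example values

# Original signal distribution: hit rate = P(X_S > lambda)
h_orig = norm.sf(lam, loc=mu_S, scale=sigma_S)

# Unit-variance counterpart: multiply X_S by 1/sigma_S and add lam - lam/sigma_S,
# giving mean lam - (lam - mu_S)/sigma_S and standard deviation 1
mu_new = lam - (lam - mu_S) / sigma_S
h_new = norm.sf(lam, loc=mu_new, scale=1.0)

print(h_orig, h_new)  # equal up to floating-point rounding
```

Because the false alarm rate depends only on $\lambda$ (with the noise distribution fixed at mean 0 and SD 1), the two parameterizations predict the same data and cannot be told apart.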
Two useful facts: If we transform a normal random variable $X$ with mean $\mu$ and variance $\sigma^2$ by multiplying it by $\alpha$ and adding $\delta$, then the expected value is $\alpha\mu + \delta$ and the variance is $\alpha^2\sigma^2$. Using these facts, if we transform our signal-trial random variable by multiplying it by $\displaystyle{\frac{1}{\sigma}}$ and adding $\lambda - \displaystyle{\frac{\lambda}{\sigma}}$, the mean of the transformed signal-trial random variable is $\displaystyle{\frac{\mu_S}{\sigma} + \lambda - \frac{\lambda}{\sigma} = \lambda - \frac{\lambda-\mu_S}{\sigma}}$ and the variance is $\displaystyle{\frac{\sigma^2}{\sigma^2}} = 1$. Substituting these into the formula for $\Pr(\text{"No"}|\text{S})$ above, we get $\Phi\left( \displaystyle{\frac{\lambda-\left(\lambda-\frac{\lambda-\mu_S}{\sigma_S}\right)}{\frac{\sigma_S}{\sigma_S}}} \right) = \Phi\left( \displaystyle{\frac{\lambda-\mu_S}{\sigma_S}} \right)$, which is right where we started. All of which is to say that $\lambda$, $\mu_S$, and $\sigma_S$ are not all identifiable in this model, because for any mean $\mu_S$ and variance $\sigma_S \neq 1$ in a signal-trial distribution, there is a scaled and shifted mean and an associated unit variance that predict exactly the same response probabilities. The transformation is invertible, too, so we can map back-and-forth as desired. Sensitivity and response bias¶ I wrote above that SDT is useful because it allows you to tease apart response bias and sensitivity (to a noisy signal). 
As a first step toward illustrating the value of teasing sensitivity and response bias apart, here are three simple SDT models with $\mu_S$ held constant and with different values of $\lambda$ (i.e., with constant sensitivity, letting response bias vary):

In [5]:
fig, axes = plt.subplots(3, 1, figsize=(17,15))
lam_list = [-.4, .75, 1.69]
for lamv, ax in zip(lam_list, axes):
    sdt.plot_simple_model(mu=[mu_N, mu_S], lam=lamv, ax=ax)

By design, we've held the sensitivity constant in these three models, but because of differences in response bias, they produce very different predicted values of $h$ and $f$:

In [6]:
for lamv in lam_list:
    print('lambda = ' + str(np.round(lamv,2)),\
          ', h = ' + str(np.round(norm.sf(lamv, loc=mu_S),2)),\
          ', f = ' + str(np.round(norm.sf(lamv, loc=mu_N),2)))

lambda = -0.4 , h = 0.99 , f = 0.66
lambda = 0.75 , h = 0.92 , f = 0.23
lambda = 1.69 , h = 0.67 , f = 0.05

The most "liberal" criterion (-0.4) predicts nearly perfect performance on signal trials, but at the cost of a nearly 2/3 false alarm rate. The moderate criterion (0.75) predicts only slightly worse performance on signal trials, but a much lower false alarm rate. The most "conservative" criterion (1.69) reduces the hit rate to about 2/3 and the false alarm rate to 0.05. As $\lambda$ increases, both $h$ and $f$ decrease, though at different rates.
Now here are three models with constant $\lambda$ and three different values of $\mu_S$:

In [7]:
fig, axes = plt.subplots(3, 1, figsize=(17,15))
mus_list = [.3, 1.4, 2.5]
fd = sdt.fig_dict
fd['xl'] = [-4, 6.4]
for mu, ax in zip(mus_list, axes):
    sdt.plot_simple_model(mu=[mu_N, mu], lam=lam, ax=ax, fig_dict=fd)

And here are the predicted values of $h$ and $f$ for these three models:

In [8]:
for mu in mus_list:
    print('mu = ' + str(np.round(mu,2)),\
          ', h = ' + str(np.round(norm.sf(lam, loc=mu),2)),\
          ', f = ' + str(np.round(norm.sf(lam, loc=mu_N),2)))

mu = 0.3 , h = 0.39 , f = 0.28
mu = 1.4 , h = 0.79 , f = 0.28
mu = 2.5 , h = 0.97 , f = 0.28

With fixed $\lambda$, $f$ is constant, and as $\mu_S$ increases, so does $h$. When we held sensitivity constant and let the response criterion vary, we saw that $h$ and $f$ decreased (at different rates) as $\lambda$ increased. On the other hand, if we let sensitivity vary and use a fixed response criterion, $h$ increases with sensitivity while $f$ stays constant.

Estimating $d^\prime$ and $\lambda$ from data

While it can be useful to see how predicted response probabilities vary with known parameter values, we typically do not know the true values of parameters. Indeed, I'm not sure it makes much sense to even talk about "the true values of parameters," but that's a big topic well beyond the scope of this notebook. SDT is useful because it allows us to state some assumptions (e.g., partially-random perception, deterministic response selection) and then use data to estimate unobservable parameter values that provide measures of a system's sensitivity and response bias.

A typical use of SDT begins with data like the data shown in the tables above. Here are the tables from the first block, repeated for convenience:

Counts:
         "short"  "long"
short         36      14
long          12     188

Proportions:
         "short"  "long"
short       0.72    0.28
long        0.06    0.94

Thus far, I've been modeling response bias with the parameter $\lambda$.
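The proportions table is just the counts table with each row normalized by its total. Treating "short" as the noise stimulus and "long" as the signal stimulus, the block-one estimates of $f$ and $h$ come straight from the rows (a minimal sketch; the variable names are mine):

```python
# Block-one counts from the table above.
n_short_short, n_short_long = 36, 14   # responses to short (noise) trials
n_long_short, n_long_long = 12, 188    # responses to long (signal) trials

# False alarm rate: "long" responses on short trials.
f = n_short_long / (n_short_short + n_short_long)
# Hit rate: "long" responses on long trials.
h = n_long_long / (n_long_short + n_long_long)
```

This reproduces the 0.28 and 0.94 entries in the proportions table, which are the inputs to the estimators derived next.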
As noted above, with $\mu_N = 0$ and $\sigma_N=1$, $\lambda$ determines $f$. Starting with this, then rearranging terms, and then using the fact that the standard normal CDF is invertible, we can get a formula for using $f$ to estimate $\lambda$:

\begin{align*}
f &= 1-\Phi(\lambda)\\
\Phi(\lambda) &= 1-f\\
\lambda &= \Phi^{-1}(1-f)
\end{align*}

The symmetry of the normal distribution allows us to simplify this formula further, since for probability $p$, $\Phi^{-1}(1-p) = -\Phi^{-1}(p)$. Hence, $\lambda = -\Phi^{-1}(f)$, or, using simpler notation, $\lambda = -Z(f)$.

If we take the short stimulus trials as "noise" trials, we can use norm.ppf() and the estimate of $f$ above to estimate $\lambda$ for block one, which we will call $\hat{\lambda}_1$:

In [9]:
lam_1 = -norm.ppf(0.28)
print('lambda =',np.round(lam_1,2))

lambda = 0.58

Estimating sensitivity is a little bit trickier. Above, I referred to $\mu_S$ as the parameter measuring sensitivity. The more standard notation is $d^\prime$, in which case the model would look like this, with $\mu_N=0$:

In [10]:
fd = sdt.fig_dict
fd['xtick_labs'] = ['0',r'$\lambda$',r'$d^\prime$']
sdt.plot_simple_model(mu=[mu_N, mu_S], fig_dict=fd)

Assuming $\sigma_S = 1$ and substituting $d^\prime$ for $\mu_S$, the formula for $h$ from above is $h = \displaystyle{1-\Phi\left( \lambda-d^\prime \right)}$.
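The symmetry fact used in that simplification, $\Phi^{-1}(1-p) = -\Phi^{-1}(p)$, can be confirmed directly with norm.ppf (a quick sketch; the probe values are arbitrary):

```python
from scipy.stats import norm

# Check Phi^{-1}(1 - p) = -Phi^{-1}(p) at a few probabilities,
# including the f estimates used in this notebook.
ps = [0.055, 0.28, 0.5, 0.94]
diffs = [norm.ppf(1 - p) + norm.ppf(p) for p in ps]  # each should be ~0
```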
Rearranging terms as we did with $f$ and $\lambda$ above, and then substituting in the formula for $\lambda$, we get:

\begin{align*}
h &= 1-\Phi(\lambda-d^\prime)\\
\Phi(\lambda-d^\prime) &= 1-h\\
\lambda-d^\prime &= \Phi^{-1}(1-h)\\
\lambda-d^\prime &= -\Phi^{-1}(h)\\
d^\prime &= \Phi^{-1}(h) + \lambda\\
d^\prime &= Z(h) - Z(f)
\end{align*}

Using the estimates of $h$ and $f$ from the table above, we get:

In [11]:
dpr_1 = norm.ppf(.94) - norm.ppf(.28)
print("d' =", np.round(dpr_1,2))

d' = 2.14

Here are the tables from block two, also repeated for convenience:

Counts:
         "short"  "long"
short        189      11
long          16      34

Proportions:
         "short"  "long"
short      0.945   0.055
long        0.32    0.68

Recall that these are data from the same listener responding to the same stimuli, presented at different rates. In block one, the long stimuli were presented 200 times and the short stimuli just 50 times, while in block two, these numbers were reversed. As noted above, the listeners in this experiment were told the relative presentation rates before each block and were encouraged to guess accordingly when uncertain. Given that the stimuli were the same in each block, it is reasonable to expect sensitivity to the duration difference ($d^\prime$) to be (close to) constant across the two blocks, and given that the listeners were told about the presentation rates and encouraged to shift their response strategies accordingly, it is reasonable to expect differences in the estimates of $\lambda$ for the two blocks.
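The two estimators, $\hat{\lambda} = -Z(f)$ and $\hat{d^\prime} = Z(h) - Z(f)$, can be bundled into a small helper that works directly from counts. This wrapper is my own sketch, not part of the notebook's sdt module; here it reproduces the block-one estimates:

```python
from scipy.stats import norm

def estimate_sdt(n_hit, n_miss, n_fa, n_cr):
    """Estimate (d-prime, lambda) from response counts using
    d-prime-hat = Z(h) - Z(f) and lambda-hat = -Z(f)."""
    h = n_hit / (n_hit + n_miss)  # hit rate
    f = n_fa / (n_fa + n_cr)      # false alarm rate
    return norm.ppf(h) - norm.ppf(f), -norm.ppf(f)

# Block one: long (signal) trials gave 188 "long" and 12 "short" responses;
# short (noise) trials gave 14 "long" and 36 "short" responses.
dpr_1, lam_1 = estimate_sdt(188, 12, 14, 36)
```

With these counts, the function returns approximately 2.14 and 0.58, matching the values computed cell-by-cell above.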
The estimates of $d^\prime$ and $\lambda$ for the second block accord nicely with these expectations:

In [12]:
lam_2 = -norm.ppf(0.055)
print('lambda =',np.round(lam_2,2))
dpr_2 = norm.ppf(.68) - norm.ppf(.055)
print("d' =",np.round(dpr_2,2))

lambda = 1.6
d' = 2.07

Illustrated as full SDT models, these parameter estimates give us:

In [13]:
fig, axes = plt.subplots(2, 1, figsize=(17,10))
fd['response_labs'] = {'N':'"Short"', 'S':'"Long"'}
fd['legend'] = ['Short','Long','Criterion']
sdt.plot_simple_model(mu=[0,dpr_1],lam=lam_1,ax=axes[0],fig_dict=fd)
sdt.plot_simple_model(mu=[0,dpr_2],lam=lam_2,ax=axes[1],fig_dict=fd)

Summary

In this first notebook, I introduced the basic concepts of SDT. The basic assumptions of SDT are unidimensional and partially random information and a deterministic response criterion. The most basic SDT-related task uses responses to "noise" and "signal + noise" inputs to estimate $d^\prime$ (the sensitivity to the signal) and $\lambda$ (the response criterion). I illustrated how changes in $\lambda$ with fixed $d^\prime$ and how changes in $d^\prime$ with fixed $\lambda$ affect $h$ (the hit rate) and $f$ (the false alarm rate). I then showed how to use response proportions from a basic signal detection task to estimate $d^\prime$ and $\lambda$. In the next notebook, I will discuss some problems with $\lambda$ as a measure of response bias, introduce a couple of better options, and discuss how additional information can be incorporated into an optimal signal detection system's response rule.
# Intersubject information mapping: revealing canonical representations of complex natural stimuli

ScienceOpen Research

### Abstract

Real-world time-continuous stimuli such as video promise greater naturalism for studies of brain function. However, modeling the stimulus variation is challenging and introduces a bias in favor of particular descriptive dimensions. Alternatively, we can look for brain regions whose signal is correlated between subjects, essentially using one subject to model another. Intersubject correlation mapping (ICM) allows us to find brain regions driven in a canonical manner across subjects by a complex natural stimulus. However, it requires a direct voxel-to-voxel match between the spatiotemporal activity patterns and is thus only sensitive to common activations sufficiently extended to match up in Talairach space (or in an alternative, e.g. cortical-surface-based, common brain space). Here we introduce the more general approach of intersubject information mapping (IIM). For each brain region, IIM determines how much information is shared between the subjects' local spatiotemporal activity patterns. We estimate the intersubject mutual information using canonical correlation analysis applied to voxels within a spherical searchlight centered on each voxel in turn. The intersubject information estimate is invariant to linear transforms including spatial rearrangement of the voxels within the searchlight. This invariance to local encoding will be crucial in exploring fine-grained brain representations, which cannot be matched up in a common space and, more fundamentally, might be unique to each individual – like fingerprints.
IIM yields a continuous brain map, which reflects intersubject information in fine-grained patterns. Performed on data from functional magnetic resonance imaging (fMRI) of subjects viewing the same television show, IIM and ICM both highlighted sensory representations, including primary visual and auditory cortices. However, IIM revealed additional regions in higher association cortices, namely temporal pole and orbitofrontal cortex. These regions appear to encode the same information across subjects in their fine-grained patterns, although their spatial-average activation was not significantly correlated between subjects.

### Most cited references

### Information-based functional brain mapping.

The development of high-resolution neuroimaging and multielectrode electrophysiological recording provides neuroscientists with huge amounts of multivariate data. The complexity of the data creates a need for statistical summary, but the local averaging standardly applied to this end may obscure the effects of greatest neuroscientific interest. In neuroimaging, for example, brain mapping analysis has focused on the discovery of activation, i.e., of extended brain regions whose average activity changes across experimental conditions. Here we propose to ask a more general question of the data: Where in the brain does the activity pattern contain information about the experimental condition? To address this question, we propose scanning the imaged volume with a "searchlight," whose contents are analyzed multivariately at each location in the brain.

### Matching categorical object representations in inferior temporal cortex of man and monkey.

(2008) Inferior temporal (IT) object representations have been intensively studied in monkeys and humans, but representations of the same particular objects have never been compared between the species.
Moreover, IT's role in categorization is not well understood. Here, we presented monkeys and humans with the same images of real-world objects and measured the IT response pattern elicited by each image. In order to relate the representations between the species and to computational models, we compare response-pattern dissimilarity matrices. IT response patterns form category clusters, which match between man and monkey. The clusters correspond to animate and inanimate objects; within the animate objects, faces and bodies form subclusters. Within each category, IT distinguishes individual exemplars, and the within-category exemplar similarities also match between the species. Our findings suggest that primate IT across species may host a common code, which combines a categorical and a continuous representation of objects.

### Estimating Mutual Information

We present two classes of improved estimators for mutual information $$M(X,Y)$$, from samples of random points distributed according to some joint probability density $$\mu(x,y)$$. In contrast to conventional estimators based on binnings, they are based on entropy estimates from $$k$$-nearest neighbour distances. This means that they are data efficient (with $$k=1$$ we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to non-uniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of $$k/N$$ for $$N$$ points. Numerically, we find that both families become *exact* for independent distributions, i.e. the estimator $$\hat M(X,Y)$$ vanishes (up to statistical fluctuations) if $$\mu(x,y) = \mu(x) \mu(y)$$. This holds for all tested marginal distributions and for all dimensions of $$x$$ and $$y$$.
In addition, we give estimators for redundancies between more than 2 random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation.

### Author and article information

###### Journal

ScienceOpen Research, ISSN 2199-1006, 26 March 2015, pages 1-9

###### Affiliations

[1] Medical Research Council, Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 7EF, UK

###### Author notes

[*] Corresponding author's e-mail address: nikolaus.kriegeskorte@mrc-cbu.cam.ac.uk

###### Article

DOI: 10.14293/S2199-1006.1.SOR-SOCSCI.APDIXF.v1

This work has been published open access under Creative Commons Attribution License CC BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Conditions, terms of use and publishing policy can be found at www.scienceopen.com.

###### Page count

Figures: 6, Tables: 0, References: 24, Pages: 9

###### Categories

Original article, Social & Behavioral Sciences
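The abstract above estimates intersubject mutual information via canonical correlation analysis. Under a Gaussian assumption, mutual information follows in closed form from the canonical correlations, $I = -\frac{1}{2}\sum_i \ln(1-\rho_i^2)$. Below is a minimal numpy sketch on synthetic data (an illustration of the principle, not the authors' implementation; the toy data and all names are invented):

```python
import numpy as np

def gaussian_mi_from_cca(X, Y, eps=1e-10):
    """Mutual information (nats) between two multivariate time series,
    assuming joint Gaussianity: I = -1/2 * sum_i log(1 - rho_i**2),
    where rho_i are the canonical correlations of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx, Syy = X.T @ X / (n - 1), Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):  # inverse symmetric square root, used for whitening
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    # Canonical correlations = singular values of the whitened cross-covariance.
    rho = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)
    rho = np.clip(rho, 0.0, 1.0 - eps)
    return -0.5 * np.sum(np.log(1.0 - rho ** 2))

# Synthetic "two subjects": each has 5 channels, 2 of which carry a shared signal.
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 2))
X = np.hstack([shared, rng.normal(size=(500, 3))])
Y = np.hstack([rng.normal(size=(500, 3)), shared + 0.1 * rng.normal(size=(500, 2))])
print(gaussian_mi_from_cca(X, Y))  # clearly positive: the shared signal is detected
```

Like the searchlight estimate described in the abstract, this quantity is invariant to any invertible linear mixing of the channels within `X` or within `Y`, including permutations.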
# What is the general solution of the differential equation? (4x+3y+15)dx + (2x + y +7)dy = 0

Dec 23, 2017

$(x + y + 4)(4x + y + 13)^2 = C$

#### Explanation:

We have:

$(4x + 3y + 15)\,dx + (2x + y + 7)\,dy = 0$

which we can write as:

$\frac{dy}{dx} = -\frac{4x + 3y + 15}{2x + y + 7}$ ..... [A]

Our standard toolkit for DEs cannot be applied directly. However, we can perform a transformation to remove the constants from the linear numerator and denominator. Consider the simultaneous equations

$\begin{cases} 4x + 3y + 15 = 0 \\ 2x + y + 7 = 0 \end{cases} \implies \begin{cases} x = -3 \\ y = -1 \end{cases}$

As a result we perform two linear transformations. Let

$\begin{cases} u = x + 3 \\ v = y + 1 \end{cases} \implies \begin{cases} x = u - 3 \\ y = v - 1 \end{cases}$

Substituting into the DE [A] gives

$\frac{dv}{du} = -\frac{4(u - 3) + 3(v - 1) + 15}{2(u - 3) + (v - 1) + 7} = -\frac{4u - 12 + 3v - 3 + 15}{2u - 6 + v - 1 + 7} = -\frac{4u + 3v}{2u + v}$ ..... [B]

This is now in a form that we can handle using a substitution of the form $v = wu$, which, if we differentiate with respect to $u$ using the product rule, gives us:

$\frac{dv}{du} = w + u\frac{dw}{du}$

Using this substitution in our modified DE [B] we get:

$w + u\frac{dw}{du} = -\frac{4u + 3wu}{2u + wu} = -\frac{4 + 3w}{2 + w}$

$\therefore u\frac{dw}{du} = -\frac{4 + 3w}{2 + w} - w = -\frac{(4 + 3w) + w(2 + w)}{2 + w} = -\frac{w^2 + 5w + 4}{2 + w} = -\frac{(w + 1)(w + 4)}{2 + w}$

This is now a separable DE, so we can rearrange and separate the variables to get:

$\int \frac{2 + w}{(w + 1)(w + 4)}\,dw = -\int \frac{1}{u}\,du$

The RHS is a standard integral, and the LHS splits into partial fractions:

$\frac{2 + w}{(w + 1)(w + 4)} = \frac{1/3}{w + 1} + \frac{2/3}{w + 4}$

so integrating gives:

$\frac{1}{3}\ln|w + 1| + \frac{2}{3}\ln|w + 4| = -\ln|u| + C$

Multiplying by $3$ and combining the logarithms:

$\ln\left|(w + 1)(w + 4)^2\,u^3\right| = C \quad \therefore (w + 1)(w + 4)^2\,u^3 = C$

Then restoring the earlier substitutions $w = \frac{v}{u}$, $u = x + 3$ and $v = y + 1$:

$\left(\frac{v}{u} + 1\right)\left(\frac{v}{u} + 4\right)^2 u^3 = (v + u)(v + 4u)^2 = C$

Thus:

$(x + y + 4)(4x + y + 13)^2 = C$
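One way to sanity-check an implicit solution is implicit differentiation: along the level curves of $F(x,y) = C$ we have $F_x\,dx + F_y\,dy = 0$, so $F_x/F_y$ must equal $(4x+3y+15)/(2x+y+7)$. A sympy sketch verifying that $(x + y + 4)(4x + y + 13)^2 = C$ satisfies this ODE:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Candidate implicit solution F(x, y) = C for (4x+3y+15)dx + (2x+y+7)dy = 0.
F = (x + y + 4) * (4*x + y + 13)**2
Fx, Fy = sp.diff(F, x), sp.diff(F, y)
# Along level curves F = C: Fx dx + Fy dy = 0, so we need
# Fx * (2x + y + 7) - Fy * (4x + 3y + 15) to vanish identically.
residual = sp.simplify(Fx * (2*x + y + 7) - Fy * (4*x + 3*y + 15))
print(residual)  # 0
```

Here $F_x = 3(4x+y+13)(4x+3y+15)$ and $F_y = 3(4x+y+13)(2x+y+7)$, so the residual cancels term by term.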
Binary Operations A binary operation is a mathematical rule used for combining two elements to produce another element. For example, $x \otimes y = z$ is a binary operation that combines $x$ and $y$ with the operator $\otimes$ to produce $z$. Guide: 1. Review Binary Operations in your textbook.
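As an illustration, a binary operation can be modeled as a two-argument function that returns another element of the same set. The operation below ($x \otimes y = x + y + xy$ on the integers) is a hypothetical example, not one from the textbook:

```python
# Hypothetical binary operation on the integers: x (x) y = x + y + x*y.
def otimes(x, y):
    return x + y + x * y

print(otimes(2, 3))   # 2 (x) 3 = 2 + 3 + 2*3 = 11
print(otimes(0, 7))   # 0 acts as an identity element here: 7
```

Note that the result of combining two integers is again an integer, so the set is closed under this operation.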
# Problem

Our ontology is manually created by people reading textbooks and extracting their facts to tables that are then converted to RDF. To create a collective superontology, we connect all those textbook sub-ontologies to a central meta model. The meta model defines core classes and properties along with specifications on how to use those properties. To make sure that the way those properties are used by the sub-ontologies is consistent with the meta ontology, we want to automatically verify that they comply with the meta specifications, as there are several causes of errors:

• an extractor may misread the type or superclass of a subject or object
• an extractor may misread the domain or range of a property
• as the meta model changes, formerly correct facts have to be changed along with the new specifications
• extractors do not look at the serialized RDF of the meta model but instead at a simplified diagram of it, which may contain inconsistencies with the serialization

In this post, we investigate how to verify property usage automatically using SPARQL queries, based on the following specifications:

## Domain and Range

The SNIK Ontology meta model specifies the domain and range for each of its properties. The three major meta classes are the pairwise disjoint meta:Function, meta:Role and meta:EntityType, all direct subclasses of meta:Top.

## Example

### meta.rdf

<owl:ObjectProperty rdf:about="uses">
  ...
  <rdfs:domain rdf:resource="Function"/>
  <rdfs:range rdf:resource="EntityType"/>
</owl:ObjectProperty>

The example means that the relation meta:uses only goes from a function to an entity type. Generally, the assertion "$D$ is the domain of $p$" means that $\forall (s,p,o) \in T: s \in D$, where $T$ is the set of all triples in some triple store. Analogously, "$R$ is the range of $p$" means that $\forall (s,p,o) \in T: o \in R$. In the following, we tackle the domain as an example; the range is analogous.
Using the Semantic Web's default open-world assumption, we can only prove that a certain resource $r$ is not an instance of a class $D$ if there is some class $X$ such that $r \in X$ and $X \cap D = \emptyset$. Otherwise, the open-world assumption entails that we simply do not know whether the fact $r \in D$ is true or not. As we do not have enough information about disjoint classes in our ontology, and we want to know if those triples are missing there even if they may exist somewhere else, we use the closed-world assumption in the following. It would be hard to find an RDF/OWL reasoner that supports the closed-world assumption, is applicable and performant enough on our data, and supports our specific use case (more about that below), and even then we would probably have to buy and learn it. Thus, we try to find a custom solution here. Indeed, we can easily formulate a naive SPARQL 1.1 query that finds triples violating a property's domain:

### Naive Domain Verification Query

select ?s ?p ?o ?domain
{
  ?p rdfs:domain ?domain.
  ?s ?p ?o.
  filter not exists {?s a ?domain.}
}

Executing this query on the SNIK SPARQL Endpoint leads to strange results, however, such as:

| s | p | o | domain |
|---|---|---|--------|
| rdfs:subPropertyOf | rdfs:label | "subPropertyOf" | rdfs:Resource |

These results contain multiple elements that we don't want to have:

### Problems

1. The triple belongs to the default graph of the SPARQL endpoint and not to our own ontology.
2. The domain is rdfs:Resource, which is almost never explicitly declared, as "All things described by RDF are called resources, and are instances of the class rdfs:Resource." (source)

We avoid problem 1 by restricting the graph to http://www.snik.eu/ontology. We can solve problem 2 in our case by inferring implicit class membership through the subclass hierarchy using SPARQL 1.1 property paths, as all our classes have a subclass path to meta:Top, which is the topmost concept used in our domain and range statements.
This leads to:

##### Query 1

select ?s ?p ?o ?domain
FROM <http://www.snik.eu/ontology>
{
  ?p rdfs:domain ?domain.
  ?s ?p ?o.
  filter not exists {?s a/rdfs:subClassOf* ?domain.}
}

However, our new error-detecting query falsely reports seemingly correct results such as these:

| s | p | o | domain |
|---|---|---|--------|
| meta:EntityType | meta:isBasedOn | meta:EntityType | meta:EntityType |
| meta:Role | meta:isInvolvedIn | meta:Function | meta:Role |

The subject of the triple and the domain are exactly the same, so why are they incorrect? First, these triples are the actual definitions of the domain statement; they should not even be selected by the first two triple patterns. They are what we call virtual triples from the graph http://www.snik.eu/ontology/virtual, which are used to visually connect the domain and range of a property in the SNIK Graph visualization. And second, this shows a common misconception and conceptual problem with the SNIK ontology: domain and range are only concerned with the instances of classes; the properties are not supposed to be used between the classes themselves! This is problematic because normally there are only two abstraction levels: the schema and the instance level. But the sub-ontologies of the meta model are ontologies of classes themselves, which should be usable by a third level of actual instances at concrete hospitals. Thus, when we correct the query, we get only a single result, which is an actual modelling error (and may already be corrected by the time you execute the query):

select ?s ?p ?o ?domain
FROM <http://www.snik.eu/ontology/bb>
FROM <http://www.snik.eu/ontology/ob>
FROM <http://www.snik.eu/ontology/meta>
{
  ?p rdfs:domain ?domain.
  ?s ?p ?o.
  filter not exists {?s rdfs:subClassOf*/a ?domain.}
}

So we now have a correct query that is unfortunately not very useful at the moment. Thus we stash it away for the future (similarly for the range) and look for a query that validates the data that is actually there right now.
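The logic of Query 1 (check each triple's subject against the property's domain via `a/rdfs:subClassOf*`) can also be mirrored outside SPARQL. Here is a small closed-world sketch in plain Python over a hypothetical in-memory triple set (toy names, not the actual SNIK data):

```python
# A minimal closed-world domain check over an in-memory triple set.
TYPE, SUBCLASS, DOMAIN = "rdf:type", "rdfs:subClassOf", "rdfs:domain"

triples = {
    ("bb:Strategy", SUBCLASS, "meta:EntityType"),
    ("meta:EntityType", SUBCLASS, "meta:Top"),
    ("ex:f1", TYPE, "meta:Function"),
    ("ex:e1", TYPE, "bb:Strategy"),
    ("meta:uses", DOMAIN, "meta:Function"),
    ("ex:f1", "meta:uses", "ex:e1"),   # ok: subject is a meta:Function
    ("ex:e1", "meta:uses", "ex:f1"),   # violation: subject is not a meta:Function
}

def superclasses(cls):
    """Reflexive-transitive closure over rdfs:subClassOf (like subClassOf*)."""
    seen, todo = set(), [cls]
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo += [o for (s, p, o) in triples if s == c and p == SUBCLASS]
    return seen

def domain_violations():
    domains = {s: o for (s, p, o) in triples if p == DOMAIN}
    types = {}
    for (s, p, o) in triples:
        if p == TYPE:
            types.setdefault(s, set()).update(superclasses(o))
    return [(s, p, o) for (s, p, o) in triples
            if p in domains and domains[p] not in types.get(s, set())]

print(domain_violations())  # [('ex:e1', 'meta:uses', 'ex:f1')]
```

The `filter not exists` of the query corresponds to the `not in` test here: a triple is reported exactly when no type of its subject reaches the domain through the subclass closure.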
As the sub-ontologies are full of information about those properties, where do we find it? The property meta:uses has the domain meta:Function and the range meta:EntityType. The blue book states that the hospital management uses a business strategy, so the most intuitive way to represent that fact as an RDF triple is:

bb:HospitalManagement meta:uses bb:BusinessStrategy.

As written above, however, we cannot state the fact this way, because both subject and object of that triple are not instances but subclasses of meta:Function and meta:EntityType, respectively.

#### Exercise

##### Question:

Can we add more domain and range statements like this?

meta:uses rdfs:domain bb:HospitalManagement

##### Answer:

No, because this would mean (as always for the domain; the range is analogous) that all subjects of meta:uses are instances of both meta:Function and bb:HospitalManagement. Thus, the effective domain would become the intersection of the two, which is equal to bb:HospitalManagement, as it is a subclass of meta:Function. But we want to restrict meta:uses not in the general case but only when the subject is of type bb:HospitalManagement. We could solve this by creating a subproperty of meta:uses, let's say bb:HospitalManagementUses, and stating domain and range for that, but we would need to add 1111 new properties for meta:uses alone. This approach may be feasible for smaller ontologies, however; we are investigating it for the upcoming IT4IT ontology. Also, upon closer inspection, the book means "each hospital management uses a business strategy" (there is at least one), for which we need a different formalism:

### OWL Restrictions

OWL restrictions are logical statements about each instance of a class.
| OWL restriction | Protégé description of X | Logic |
|---|---|---|
| owl:someValuesFrom Y | P some Y | $\forall x \in X: \exists y \in Y: (x,y) \in P$ |
| owl:allValuesFrom Y | P all Y | $\forall x \in X: \forall y: (x,y) \in P \rightarrow y \in Y$ |

Others: owl:hasValue, owl:cardinality, owl:minCardinality, owl:maxCardinality (more information)

OWL restrictions are convoluted and hard to visualize, but we don't have an alternative here. This exemplifies an inherent problem of RDF, which cannot directly represent ternary and higher relationships and needs this kind of helper structure to express them. Protégé does its best to sweep this problem under the carpet. The full ugliness of our example case in an RDF/XML snippet using blank nodes:

<owl:Class rdf:about="HospitalManagement">
  <rdfs:subClassOf rdf:resource="&meta;Management"/>
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="&meta;uses"/>
      <owl:someValuesFrom rdf:resource="BusinessStrategy"/>
    </owl:Restriction>
  </rdfs:subClassOf>
  ...
</owl:Class>

Note that this means that each instance of hospital management is not prevented from using more than one business strategy, or something else in addition to one. In order to hide this in our visualization, we added "virtual triples" like bb:HospitalManagement meta:uses bb:BusinessStrategy, so that the nodes are directly connected with the relation, but for the verification we need to query the original OWL restrictions. Note that at the moment, owl:someValuesFrom is used in all 657 restrictions. I suspect this often differs from what the extractors actually wanted to express and that owl:someValuesFrom is simply the only way connections between classes were translated by our old conversion tool, so that we can never recover the original intent of the extractors. We should make allowances for that in our new ontologies, however, and provide our extractors with all possible restrictions. But, to come back to our verification problem, the question now becomes "Is it actually possible to fulfill this restriction?", again under the closed-world assumption.
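The someValuesFrom row of the table above ($\forall x \in X: \exists y \in Y: (x,y) \in P$) can be checked directly under the closed-world assumption. A tiny sketch with hypothetical instance data (the names are invented, not from the SNIK ontologies):

```python
# Closed-world check of the owl:someValuesFrom semantics:
# every member of xs must be related by `pairs` to at least one member of ys.
def some_values_from_ok(xs, ys, pairs):
    return all(any((x, y) in pairs for y in ys) for x in xs)

managements = {"ex:uniClinicMgmt", "ex:cityHospitalMgmt"}
strategies = {"ex:strategyA", "ex:strategyB"}
uses = {("ex:uniClinicMgmt", "ex:strategyA"),
        ("ex:cityHospitalMgmt", "ex:strategyB"),
        ("ex:cityHospitalMgmt", "ex:somethingElse")}  # extra uses are allowed

print(some_values_from_ok(managements, strategies, uses))  # True
```

Note that the extra pair for `ex:cityHospitalMgmt` does not violate the restriction, mirroring the point above that someValuesFrom does not prevent additional uses.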
We thus need to check whether the class that contains the restriction (technically, that is a superclass of it) is equal to or a subclass of the domain of the property.

select distinct(?s)
FROM <http://www.snik.eu/ontology/meta>
FROM <http://www.snik.eu/ontology/bb>
FROM <http://www.snik.eu/ontology/ob>
{
  graph <http://www.snik.eu/ontology/meta> {?p rdfs:domain ?domain.}
  ?s a owl:Class.
  ?s rdfs:subClassOf ?superClass.
  filter not exists {?superClass a owl:Restriction.}
  ?s rdfs:subClassOf ?r.
  ?r a owl:Restriction.
  ?r owl:someValuesFrom ?o.
  ?r owl:onProperty ?p.
  filter not exists {?s rdfs:subClassOf+ ?domain.}
}
order by ?s

The simultaneous presence of the SPARQL 1.1 features of property paths and negation seemed to trigger some bug and finally crashed our SPARQL endpoint. A server restart later, the query still gave nonsensical results. Validating that this was actually a bug and not an incorrect query took the most time of the whole validation task, because it changed the result in very subtle ways and always disappeared when I removed any essential aspect in order to produce a minimal working example for a bug report. When I finally confirmed it, I created a workaround of several steps using the Linux comm tool:

1. remove filter not exists {?s rdfs:subClassOf+ ?domain.}, save the result as TSV in file all
2. remove filter not exists {} but leave the inner triple patterns, save the result as TSV in correct
3. remove the first line and quotation marks from both files and run comm -23 all correct > incorrect

Many of the errors were eliminated by deleting and reuploading the ob graph, which likely still contained some leftovers from previous versions, so that finally 80 inconsistent domains were found. However, this method also has to be applied to rdfs:range, and in the future to new ontologies, so constantly applying the workaround would have been too time-consuming in the long run. Thus, I settled on this monster of a workaround SPARQL query.
I’m not proud of it, but with 98 results it is reasonably close to correct to be used for manual checking, and I don’t have the time to do Virtuoso (07.20.3217) bug fixing. Anyway, here is the final query for the domain (also online):

select distinct(?s) ?domain
FROM <http://www.snik.eu/ontology/meta>
FROM <http://www.snik.eu/ontology/bb>
FROM <http://www.snik.eu/ontology/ob>
{
  graph <http://www.snik.eu/ontology/meta> {?p a owl:ObjectProperty.}
  ?p rdfs:domain ?domain.
  ?s a owl:Class.
  ?s rdfs:subClassOf ?superClass.
  filter not exists {?superClass a owl:Restriction.}
  ?s rdfs:subClassOf ?r.
  ?r a owl:Restriction.
  ?r owl:someValuesFrom ?o.
  ?r owl:onProperty ?p.
  filter(?s!=?domain)
  MINUS {?s rdfs:subClassOf ?domain.}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf ?domain].}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?domain]].}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?domain]]].}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?domain]]]].}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?domain]]]]].}
  MINUS {?s rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?domain]]]]]].}
}

And for the range (results, also online):

select distinct(?s) ?range
FROM <http://www.snik.eu/ontology/meta>
FROM <http://www.snik.eu/ontology/bb>
FROM <http://www.snik.eu/ontology/ob>
{
  graph <http://www.snik.eu/ontology/meta> {?p a owl:ObjectProperty.}
  ?p rdfs:range ?range.
  ?s a owl:Class.
  ?s rdfs:subClassOf ?superClass.
  filter not exists {?superClass a owl:Restriction.}
  ?s rdfs:subClassOf ?r.
  ?r a owl:Restriction.
  ?r owl:someValuesFrom ?o.
  ?r owl:onProperty ?p.
  filter(?o!=?range)
  MINUS {?o rdfs:subClassOf ?range.}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf ?range].}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?range]].}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?range]]].}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?range]]]].}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?range]]]]].}
  MINUS {?o rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf [ rdfs:subClassOf ?range]]]]]].}
}
# Thread: supremum

1. ## supremum

Let $A$ be a nonempty set of real numbers which is bounded below. Let $-A = \{-x: x \in A \}$. Prove that $\inf A = -\sup(-A)$.

So I want to show that (i) $\inf A \leq -\sup(-A)$ and (ii) $\inf A \geq -\sup(-A)$. Now $\inf A$ exists because $A \neq \emptyset$ and $A$ is bounded below. Also, $\sup(-A)$ exists since $-A$ is non-empty and bounded above. Now $-x \leq \sup(-A) \leq \inf A$ and $\sup(-A) \leq \inf A \leq x$. I took $A$ to have a lower bound greater than $0$. You can apply the same argument if you set $A$ to have a lower bound less than $0$ (e.g. break it into cases)? From these inequalities, can we conclude that $\inf A = -\sup(-A)$? Adding them we get $\sup(-A) - x \leq \sup(-A) + \inf A \leq \inf A + x$.

2. Here is an outline that may help you get some ideas on your proof. $x \in A\, \Rightarrow \,\inf (A) \leqslant x\, \Rightarrow \, - x \leqslant - \inf (A)\, \Rightarrow \,\sup ( - A) \leqslant - \inf (A)$. Now suppose that $\sup ( - A) < - \inf (A)$, then $\inf (A) < - \sup ( - A)$. By definition of inf, $\left( {\exists a \in A} \right)\left[ {\inf (A) \leqslant a < - \sup ( - A)} \right]$. But that means that $\sup ( - A) < - a$, a contradiction.

3. Originally Posted by Plato
Here is an outline that may help you get some ideas on your proof. $x \in A\, \Rightarrow \,\inf (A) \leqslant x\, \Rightarrow \, - x \leqslant - \inf (A)\, \Rightarrow \,\sup ( - A) \leqslant - \inf (A)$. Now suppose that $\sup ( - A) < - \inf (A)$, then $\inf (A) < - \sup ( - A)$. By definition of inf, $\left( {\exists a \in A} \right)\left[ {\inf (A) \leqslant a < - \sup ( - A)} \right]$. But that means that $\sup ( - A) < - a$, a contradiction.
Or $x \in A \Rightarrow x \geq \inf A$. Then $-x \leq -\inf A$. Thus $\sup(-A)$ exists. Now suppose there is some $p$ such that $-x \leq p < -\inf A$. Then $\inf A \leq -p < x$. This contradicts the definition of infimum. Hence $\sup(-A) = - \inf A$. Or $\inf A = -\sup(-A)$.
## Project Management Engineering

There are many different methods for project management.  In general, the authors of this site subscribe to the methods and techniques laid out by the Project Management Institute.  The reasoning is that the PMI has developed a comprehensive and scalable process for managing projects across multiple industries.  For a complete and thorough education on project management, the Project Management Body of Knowledge (PMBOK) is an excellent resource and should be referenced.

It is important to note that not every project may support the techniques laid out by PMI.  For instances like these, it is up to the project manager (PM) to decide to what extent each knowledge area and process should be utilized.  However, a PM should be familiar with all aspects laid out in the PMBOK.

### Science Branches

Science
Applied Science
Engineering
Management and Systems Engineering
Project Management Engineering

### Nomenclature & Symbols

• $$AC$$  -  actual cost
• $$BAC$$  -  budget at completion
• $$CAPEX$$  -  capital expenditure
• $$CPI$$  -  cost performance index
• $$DCF$$  -  discounted cash flow
• $$EV$$  -  earned value
• $$EAM$$  -  enterprise asset management
• $$EWO$$  -  engineering work order
• $$PRV$$  -  estimated plant replacement value
• $$ISSC$$  -  Information Systems Steering Committee
• $$OPEX$$  -  operational expenditure
• $$PROJ$$  -  project
• $$PM$$  -  project management
• $$PME$$  -  project management engineer

## Project Management Engineering Glossary

### A

• Accrued Cost  -  Earmarked for the project and for which payment is due, but has not been made.
• Actual Cost  -  The realized cost incurred for the work performed on an activity during a specific time period.
• Advanced Product Quality Planning  -  A structured method of defining and establishing the steps necessary to assure that a product satisfies the customer.
• Approved Vendor List  -  Ensures purchased products arrive on time and meet your quality control requirements.
• Authorization  -  The decision that triggers the allocation of funding needed to carry out the project.
• Authorized Work  -  The effort which has been defined, plus that work for which authorization has been given, but for which defined contract costs have not been agreed upon.

### B

• Baseline Schedule  -  A fixed project schedule.
• Best Practice  -  Something that we have learned from experience on a number of similar projects.
• Bill of Materials  -  A list of the parts, raw materials, and accessories (descriptions, part name, part number, quantity, reference designation, procurement type) that make up the assembly or entire product.
• Brainstorming  -  The unstructured generation of ideas by a group of people.
• Budget  -  The funds allocated to the project that represent the estimated planned expenditures.
• Budget at Completion  -  The sum of all budgets established for the work to be performed.
• Budget Cost  -  The cost anticipated at the start of a project.
• Budgeting and Cost Management  -  The estimating of costs and the setting of an agreed budget, and the management of actual and forecast costs against that budget.
• Budgeting  -  Time-phased financial requirements.

### C

• Capital Cost  -  The carrying cost in a balance sheet of acquiring an asset and bringing it to the condition where it is capable of performing its intended function over a future series of periods.
• Capital Expenditure  -  The amount of money that needs to be spent on a project to see it from design through completion.
• Cash Flow  -  Cash receipts and payments in a specific period.
• Change Request  -  Outlines a problem and proposes an action to address the problem.
• Conflict Management  -  The process of identifying and addressing differences.
• Contingencies  -  Planned actions for minimizing the damage caused by a problem.
• Constraint  -  A condition or occurrence that might limit, restrict, or regulate the project.
• Cost Baseline  -  The approved version of work package cost estimates and contingency reserve that can be changed only through formal change control procedures.
• Cost Benefit Analysis  -  The relationship between the costs of undertaking a task or project and the benefits likely to arise from the changed situation.
• Cost Management Plan  -  A component of a project or program management plan that describes how costs will be planned, structured, and controlled.
• Cost Performance Index  -  A measure of the cost efficiency of budgeted resources, expressed as the ratio of earned value to actual cost.
• Critical Activity  -  An activity with zero or negative float; it has no allowance for work slippage.
• Critical Path  -  The path in a project schedule that has the longest duration.
• Critical Path Activity  -  Any activity on the critical path in a project schedule.

### D

• Deliverable  -  Any unique and verifiable product, result, or capability to perform a service that is produced to complete a process, phase, or project.
• Direct Costs  -  Costs specifically attributed to an activity or group of activities without apportionment.
• Discounted Cash Flow  -  The relating of future cash inflows and outflows over the life of a project or operation to a common base value.
• Document Control  -  The function of managing and controlling product documentation.
• Downtime  -  The amount of time that equipment is not operating or is out of service as a result of equipment failure.
• Duration  -  The length of time required or planned for the execution of a project activity.

### E

• Earned Value  -  The measure of work performed in terms of the budget authorized for that work.
• Employee Management & Time Tracking  -  Time tracking software manages labor availability, work order time tracking, reporting, costs on work orders, etc.
• Engineering Change Notice  -  A form that communicates the details of an approved change to someone who needs to know about the change.
• Engineering Change Request  -  A suggestion that can be submitted to management to solve a problem or make an improvement to a product.
• Engineering Work Order  -  Initiates an engineering investigation, engineering design activity, or engineering modifications to an item of equipment.
• Enterprise Asset Management  -  A type of maintenance software that collects and analyzes data for physical assets during all phases of the asset cycle, including the acquisition, maintenance, and disposal phases.
• Estimated Plant Replacement Value  -  The approximate cost to replace the existing assets with new assets to achieve the same production capability.
• Estimating  -  An approximation of project time and cost targets that is refined throughout the project life cycle.

### F

• Failure Rate  -  The anticipated number of times that an asset or piece of equipment fails in a specific period of time.
• Fast Tracking  -  A schedule compression technique in which activities or phases normally done in sequence are performed in parallel for at least a portion of their duration.
• Fill Order  -  An order that has had all its requirements met and can be closed.
• Fill Rate  -  The percentage of orders that are shipped in full and on time and were met through currently available stock.
• Finished Goods  -  An item that is manufactured for sale.
• Fixed Asset  -  A long-term tangible asset or piece of equipment that a business owns and uses in its operations to generate income.

### G

• Good Manufacturing Practice  -  A system of processes, procedures, and documents that helps ensure that products are consistently produced and controlled according to quality standards.

### H

• Historical Data  -  Information on archived projects, including documents and reports.

### I

• Incurred Costs  -  The sum of actual and committed costs, whether invoiced/paid or not, at a specific time.
• Indirect Cost  -  Costs associated with a project that cannot be directly attributed to an activity or group of activities.
• Information Systems Steering Committee  -  A committee that provides management representatives from all the key organizations across the institution.

### L

• Lessons Learned  -  A set of statements captured after completion of a project or a portion of a project.

### M

• Management Development  -  All aspects of staff planning, recruitment, development, training, and assessment.
• Mission Statement  -  A brief summary, approximately one or two sentences, that sums up the background, purposes, and benefits of the project.
• Mitigation  -  Actions taken to eliminate or reduce risk by reducing the probability and/or impact of occurrence.

### N

• Negotiation  -  Satisfying needs by reaching agreement or compromise with other parties.
• New Product Development  -  The total process that takes a service or a product from concept to market.
• New Product Introduction  -  All the activities within an organization to define, develop, and launch a new or improved product.

### O

• Objective  -  Something towards which work is to be directed.
• Operational Expenditure  -  The amount of money that needs to be spent after a project's completion to continue its use.
• Operative Rule  -  The business rules an organization chooses to enforce as a matter of policy.
• Organization  -  An autonomous unit within an enterprise under the management of a single individual or board.
• Organization Modeling  -  An analysis technique used to describe roles, responsibilities, and reporting structures that exist within an organization.
• Organizational Process Asset  -  All materials used by groups within an organization to define, implement, tailor, and maintain their processes.
• Organizational Unit  -  A recognized association of people in the context of an organization or enterprise.
• Overhead - Costs incurred that cannot be directly related to the products or services produced.

### P

• Peer Review - A validation technique in which a small group of stakeholders evaluates a portion of a work product to find errors to improve its quality.
• Policy - A set of ideas or course of action adopted by or proposed for a system or organization.
• Process - Also called a procedure; an operation or an activity.
• Process Center - A resource or collection of resources, commonly people or machines, where an operation or set of operations is performed.
• Process Control - The monitoring of the production process through software.
• Process Management - The act of planning, coordinating, and overseeing processes with a view to improving outputs and reducing costs.
• Process Map - A graphical flowchart identifying the operations in a process, the steps in each operation, and the work time for each step.
• Process Model - A visual model or representation of the sequential flow and control logic of a set of related activities or actions.
• Process Time - The time a job spends at an individual station in a production system, from the time the station begins working on it until the time the station finishes.
• Product - Something that is produced.
• Product Data Management System - A system used to hold mechanical CAD files, including part and assembly models as well as drawing files.
• Product Scope - The features and functions that characterize a product, service, or result.
• Production Lifecycle Management - Management of the product's records, including bill of materials, specifications, changes, and revisions from beginning to end.
• Production Planning - The process of devising or estimating the conversion of resources and information to achieve an end.
• Production System - A specific or defined set of operations within a larger supply network or value chain that produces technical or physical output to satisfy an external demand.
• Project Management - The application of knowledge, skills, and principles to a program to achieve the program objectives.
• Project Management Plan - A document that integrates the program's plan and establishes the management controls and overall plan for integrating and managing the program's individual components.
• Project Manager - The stakeholder assigned by the performing organization to manage the work required to achieve the project objectives.

### Q

• Quality Management System - A framework for product and service development that optimizes for the continuous improvement of quality.

### R

• Risk - A future event or problem that exists outside of the control of the project that will have an adverse impact on the project if it occurs.
• Risk Analysis - The examination of risk areas or events to assess the probable consequences of each event, or combination of events.
• Risk Avoidance - A risk response strategy whereby the project team acts to eliminate the threat or protect the project from its impact.
• Risk Impact - The harm or consequences for a project of a risk if it occurs.
• Risk Management - A process to assess potential problems, determine which risks are important to deal with, and implement strategies to reduce consequences.
• Risk Mitigation - A risk response strategy whereby the project team acts to decrease the probability of occurrence or impact of a threat.
• Root Cause Analysis - The process of discovering the root causes of problems in order to identify appropriate solutions.

### S

• Scope - Work that must be performed to deliver a product, service, or their related functions.
• Scope Change - Any change to the project scope, which includes any adjustment to the cost, quality, or schedule.
• Scope Creep - Adding features and functions to the project scope without approval.
• Scope Model - A model that defines the boundaries of a business domain or solution.
• Specifications - Detailed statements of project deliverables that result from requirements and design.
• Stakeholder - A person or group who has vested interests that may be affected by the outcome of the project.
• Standard Operating Procedure - A set of clearly written instructions which outline the steps or tasks needed to complete a job.
• Start-to-finish - A logical relationship in which a successor activity cannot finish until a predecessor activity has started.
• Start-to-start - A logical relationship in which a successor activity cannot start until a predecessor activity has started.
• Stock - A set of complete transformations or entities.
• Successor Activity - A dependent activity that logically comes after another activity in the schedule.
• Supply Chain - A sequence of processes involved in the manufacturing, transportation, and selling of a product.

### U

• User - A stakeholder, person, device, or system that directly or indirectly accesses a system.

### V

• Validation - The process of checking a product to ensure that it satisfies its intended use and conforms to its requirements.

### W

• Walkthrough - A type of peer review in which participants present, discuss, and step through a work product to find errors.
• Work Around - An immediate or temporary response to an issue for which a prior response had not been planned.
• Work Flow - The relationship of the activities in a project from start to finish.
• Work in Progress - The set of entities that are partially transformed within any given process.
• Work Load - The amount of work units assigned to a resource over a period of time.
• Work Product - The document, collection of notes, or diagrams used by the business analyst during the requirements development process.
# [pstricks] Is pst-pdf meant to work with arbitrary postscript Gene Selkov selkovjr at gmail.com Wed Sep 16 06:18:35 CEST 2009 Greetings to all PSTricksters out there, I have just stumbled on this line: "The package pst-pdf simplifies the use of graphics from PSTricks and other PostScript code in PDF documents" and decided to give pst-pdf a try, assuming that "other PostScript" would mean "any other". I'm seeing variable degrees of success. To start from afar: I am putting together a book with docbook/DSSSL, dbtexmath and ochem. It builds fine as postscript, but it would be nice to render it as PDF as well (I am trying to use pdfjadetex for that), because conversions from postscript are too inaccurate. Also, testing cross-references in postscript is not easy. The problem at hand: pst-pdf creates broken postscript out of postscript special. I know nothing about postscript and I don't know how broken it is. I get this message when I try to view it in evince (which I believe has a way of calling gs): undefined -21 and when I call gs directly, the message is: Error: /undefined in b12 Operand stack: 30.0 0.0 0.0 18.0 90.0 15.59 9.0 12.0 40.0 15.59 9.0 6.0 Execution stack: %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1862 1 3 %oparray_pop 1861 1 3 %oparray_pop 1845 1 3 %oparray_pop 1739 1 3 %oparray_pop --nostringval-- %errorexec_pop .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- Dictionary stack: --dict:1147/1684(ro)(G)-- --dict:1/20(G)-- --dict:76/200(L)-- --dict:178/300(L)-- --dict:53/200(L)-- Current allocation mode is local Last OS error: 11 Current file position is 1047049 GPL Ghostscript 8.64: Unrecoverable error, exit code 1 The bit that causes this error looks like this: {\unitlength1bp\begin{pspicture}( 74.95, 27.94) {\chemfontname \special{\string" %initialization section 0.500000 setlinewidth /lw 0.500000 
def /bw 2.500000 def /bd 1.000000 def /aw 5.000000 def %formula 30.00 0.00 0.00 18.00 b0 90.00 15.59 9.00 12.00 b0 40.00 15.59 9.00 6.00 b12 -30.00 15.59 9.00 18.00 b0 30.00 31.18 0.00 18.00 b0 } % end of special \put(-1.000000,-1.000000){\textcolor{white}{\rule{0.010000pt}{0.010000pt}}} \put(75.952932,28.942750){\textcolor{white}{\rule{0.010000pt}{0.010000pt}}} \put(11.908802,21.000000){\textcolor{white}{\rule{7.359310pt}{6.942750pt}}} \put(11.908802,21.000000){O} \put(17.685334,9.567841){\textcolor{white}{\rule{4.998780pt}{6.577770pt}}} \put(17.685334,9.567841){3} \put(46.765372,5.528625){\textcolor{white}{\rule{28.187560pt}{6.942750pt}}} \put(46.765372,5.528625){COOH} }\end{pspicture}}%s I see "b12" in there -- is it a co-incidence, or is it the element that breaks it? If I replace it with "b0", the figure gets rendered without crashing, but then it has nothing but text in it. For comparison, the following picture is rendered entirely correctly, although it is not as interesting: {\unitlength1bp\begin{pspicture}( 24.65, 8.44) {\chemfontname \special{\string" %initialization section 0.500000 setlinewidth /lw 0.500000 def /bw 2.500000 def /bd 1.000000 def /aw 5.000000 def %formula 0.00 0.00 4.97 6.00 b0 } % end of special \put(-1.000000,-1.000000){\textcolor{white}{\rule{0.010000pt}{0.010000pt}}} \put(25.649340,9.442730){\textcolor{white}{\rule{0.010000pt}{0.010000pt}}} \put(6.000000,0.000000){\textcolor{white}{\rule{18.649340pt}{8.442730pt}}} \put(6.000000,1.499980){NH$_2$} }\end{pspicture}}%s I see no problems at all when the same pictures that result in broken postscript when parsed from a picture environment are instead rendered into external eps files and then referenced from the latex source. But that's not a tenable approach to pictures in this book; there are multiple thousands of them, and I can only manage them inline, or I won't pull it off. I'll appreciate any hints for how to debug this. Thanks, --Gene
Eigenfunctions of an operator

1. Sep 3, 2007 buraqenigma

How can I prove that $$u(x)=\exp(-x^{2}/2)$$ is an eigenfunction of $$\hat{A} = \frac{d^{2}}{dx^{2}}-x^2$$?

2. Sep 3, 2007 meopemuk

You should check that $$\hat{A} u(x)= \lambda u(x)$$, where $\lambda$ is a constant.

Eugene.

3. Sep 3, 2007 buraqenigma

Thanks, sir. Now I remember it.
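Carrying out meopemuk's check by hand: $u' = -x e^{-x^2/2}$, $u'' = (x^2-1)e^{-x^2/2}$, so $\hat{A}u = (x^2-1)u - x^2 u = -u$ and the eigenvalue is $\lambda = -1$. The same verification can be done symbolically (Python/sympy, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.exp(-x**2 / 2)

# Apply the operator A = d^2/dx^2 - x^2 to u(x).
Au = sp.diff(u, x, 2) - x**2 * u

# If u is an eigenfunction, (A u)/u simplifies to a constant eigenvalue.
eigenvalue = sp.simplify(Au / u)
print(eigenvalue)  # -1
```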
Diane finds 2 and a half cans of paint are just enough to

Intern
Joined: 25 Jan 2013
Posts: 24

Diane finds 2 and a half cans of paint are just enough to [#permalink] 16 Feb 2013, 12:40

Diane finds 2 and a half cans of paint are just enough to paint one third of her room. How many more cans of paint will she need to finish her room and paint a second room of the same size?

A. 5
B. 7 and a half
C. 10
D. 12 and a half
E. 15

VP
Status: Top MBA Admissions Consultant
Joined: 24 Jul 2011
Posts: 1354
GMAT 1: 780 Q51 V48
GRE 1: 1540 Q800 V740

Re: Diane finds 2 and a half cans of paint are just........ [#permalink] 16 Feb 2013, 22:27

She will need 5 cans to paint the rest of this room and seven and a half for the next room, for a total of twelve and a half cans. D it is.
_________________
GyanOne | Top MBA Rankings and MBA Admissions Blog
Top MBA Admissions Consulting | Top MiM Admissions Consulting
Premium MBA Essay Review | Best MBA Interview Preparation | Exclusive GMAT coaching

Veritas Prep GMAT Instructor
Joined: 16 Oct 2010
Posts: 7991
Location: Pune, India

Re: Diane finds 2 and a half cans of paint are just........ [#permalink] 18 Feb 2013, 21:10

antoxavier wrote:
Diane finds 2 and a half cans of paint are just enough to paint one third of her room. How many more cans of paint will she need to finish her room and paint a second room of the same size?
A. 5
B. 7 and a half
C. 10
D. 12 and a half
E. 15

Notice that for 1/3 of a room, she used 2.5 cans. For 2 full rooms (i.e. 6 times the initial work), she will need a total of 2.5 * 6 = 15 cans. So she needs 15 - 2.5 = 12.5 more cans to finish her room and paint another room of the same size.

_________________
Karishma
Veritas Prep | GMAT Instructor
My Blog

VP
Status: Top MBA Admissions Consultant
Joined: 24 Jul 2011
Posts: 1354

Re: Diane finds 2 and a half cans of paint are just enough to [#permalink] 04 Mar 2013, 21:56

Other questions based on the unitary method:
if-sally-can-paint-a-house-in-4-hours-and-john-can-paint-141777.html
lindsay-can-paint-1-x-of-a-certain-room-in-20-minutes-what-59488.html

Manager
Joined: 14 Oct 2014
Posts: 66
Location: United States
GMAT 1: 500 Q36 V23

Re: Diane finds 2 and a half cans of paint are just enough to [#permalink] 07 Dec 2014,
12:55

We can use ratios to solve this problem. We know that 2.5 cans = 25/10 is used to paint 1/3 of a room. We need to find how much paint we'll need in order to paint the rest of the room (1 - 1/3 = 2/3) and 1 room more, hence 2/3 + 1 = 5/3.

25/10 : 1/3 = x : 5/3
(25/10)*(5/3) = (1/3)*x
25/6 = x/3
x = 75/6
x = 12.5

Manager
Joined: 17 Aug 2015
Posts: 105

Re: Diane finds 2 and a half cans of paint are just enough to [#permalink] 13 Mar 2016, 06:52

To do this problem we can pick a convenient number. Let us say the total room is 9 parts. To do 3 parts she needs 2.5. To do the next 6, she needs 5. Then to do all 9 she needs 7.5. So the total required cans are 7.5 + 5 = 12.5.

Manager
Joined: 09 Jun 2015
Posts: 102

Re: Diane finds 2 and a half cans of paint are just enough to [#permalink] 13 Mar 2016, 07:53

Diane has to finish painting two rooms of 6 parts, each part equal to one-third of a room. For 1 part she needs 2 and a half cans of paint. For the other 5 parts she needs 5*2.5 = 12.5 cans.

Manager
Joined: 23 Jul 2015
Posts: 166

Diane finds 2 and a half cans of paint are just enough to [#permalink] 09 Aug 2017, 04:21

5/2 cans paint 1/3 of the room, so 1 can paints 2/15 of the room --> to paint a room she needs 7.5 cans. She'll need 5 more cans to paint the rest of the room and another 7.5 cans to paint the second room. Total 12.5 cans (Choice D).

VP
Joined: 22 May 2016
Posts: 1420

Diane finds 2 and a half cans of paint are just enough to [#permalink] 09 Aug 2017, 14:27

antoxavier wrote:
Diane finds 2 and a half cans of paint are just enough to paint one third of her room. How many more cans of paint will she need to finish her room and paint a second room of the same size?
A. 5
B. 7 and a half
C. 10
D.
12 and a half
E. 15

Proportion with multiplier approach

Remaining work = $$\frac{5}{3}$$. Let x = amount of paint needed.

$$\frac{\frac{5}{2}}{\frac{1}{3}}$$ = $$\frac{x}{\frac{5}{3}}$$

Use the multiplier for the denominator: $$\frac{1}{3}$$ ---> $$\frac{5}{3}$$ means $$\frac{1}{3}$$ was multiplied by 5. So multiply the numerator by 5.

$$\frac{5}{2}$$ * 5 = $$\frac{25}{2}$$ or 12$$\frac{1}{2}$$

Straight proportion method

Remaining work: $$\frac{5}{3}$$. Let x = amount of paint needed.

$$\frac{\frac{5}{2}}{\frac{1}{3}}$$ = $$\frac{x}{\frac{5}{3}}$$

$$\frac{1}{3}$$x = $$\frac{5}{2}$$ * $$\frac{5}{3}$$

$$\frac{1}{3}$$x = $$\frac{25}{6}$$

x = $$\frac{25}{6}$$ * 3

x = $$\frac{25}{2}$$ or 12$$\frac{1}{2}$$

_________________
At the still point, there the dance is. -- T.S. Eliot
Formerly genxer123
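All of the approaches in the thread reduce to the same unitary-method arithmetic, which is easy to sanity-check in a couple of lines (Python, purely illustrative):

```python
cans_per_third = 2.5
# Two full rooms = 6 thirds of a room in total.
total_cans = cans_per_third * 6            # 15.0 cans for both rooms
more_needed = total_cans - cans_per_third  # subtract the 2.5 already used
print(more_needed)                         # 12.5 -> answer D
```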
# Motivic Galois theory for motives of level ≤ 1

Bertolin, Cristiana (2003) Motivic Galois theory for motives of level ≤ 1. arXiv.

## Abstract

Let T be a Tannakian category over a field k of characteristic 0 and (T) its fundamental group. In this paper we prove that there is a bijection between the ⊗-equivalence classes of Tannakian subcategories of T and the normal affine group sub-T-schemes of (T). We apply this result to the Tannakian category T_1(k) generated by motives of niveau 1 defined over k, whose fundamental group is called the motivic Galois group G_mot(T_1(k)) of motives of niveau 1. We find four short exact sequences of affine group sub-T_1(k)-schemes of G_mot(T_1(k)), correlated one to each other by inclusions and projections. Moreover, given a 1-motive M, we compute explicitly the biggest Tannakian subcategory of the one generated by M, whose fundamental group is commutative.

Item Type: Article
Institutions: Mathematics > Prof. Dr. Uwe Jannsen
arXiv ID: arXiv:math/0309379v2
MSC Classification: 14L15; 18A22
Keywords: Fundamental group of Tannakian categories, Artin motives, 1-motives, motivic Galois groups
Subjects: 500 Science > 510 Mathematics
Status: Published
Refereed: Yes, this version has been refereed
Created at the University of Regensburg: Yes
Deposited On: 27 Nov 2009 08:04
Item ID: 10991
# Let's Be Discrete

APSTAT-JLJAWX

Which of the following situations does NOT describe a discrete random variable?

A. The number of eggs in a carton of $12$ eggs which are found to be cracked or broken.

B. The number of times a coin will land on heads when tossed $10$ times.

C. The number of times in one month that a service technician must be called in to repair the office Xerox machine.

D. The number of days in one week where Javier will have a substitute teacher in at least one of his classes.

E. The amount of time it takes an experienced runner to run one mile.
A basketball team finds that if it charges $25 per ticket, the average attendance per game is 400. For each $0.50 decrease in ticket price, attendance rises by 10. What ticket price yields the maximum revenue?

Revenue = ticket price (C) × attendance (A), which is a quadratic in the number of price decreases.
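One way to carry the hint through (a sketch, with n counting the number of $0.50 decreases): R(n) = (25 - 0.5n)(400 + 10n) = -5n^2 + 50n + 10000, a downward-opening parabola with vertex at n = 50/(2·5) = 5, i.e. a ticket price of 25 - 0.5·5 = $22.50.

```python
# Revenue as a function of n, the number of $0.50 price decreases:
# price = 25 - 0.50*n, attendance = 400 + 10*n.
def revenue(n):
    return (25 - 0.5 * n) * (400 + 10 * n)

# The quadratic's maximum lands on the integer n = 5, so a brute-force
# search over a sensible range finds the vertex exactly.
best_n = max(range(50), key=revenue)
print(best_n)              # 5
print(25 - 0.5 * best_n)   # 22.5  (ticket price that maximizes revenue)
print(revenue(best_n))     # 10125.0
```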
# Practical ways to find the principal focal length - lens equation

There is another way to find the principal focal length of a convex lens, but unfortunately we have to learn another equation, the lens equation. It is really easy to remember using the following mnemonic:

Lens equation: IF I DO I DIE
I/F = I/(do) + I/(di)
1/F = 1/(do) + 1/(di)

The lens equation is:

1/F = 1/(do) + 1/(di)

Where
F = principal focal length
do = object distance
di = image distance

Place a burning candle at one end of a table, a convex lens in the centre of the table, and a screen at the other end of the table. Move the screen and convex lens backwards and forwards on the table until a sharp upside-down image of the burning candle is made. Once you have a sharp upside-down image on the screen you can measure do and di, put the figures into the formula 1/F = 1/(do) + 1/(di), and work out the principal focal length F.

(Ray diagram of the candle, lens, and inverted image omitted.)

Question 1
In the experiment below an upside-down image of the candle has been clearly formed on the screen, with do = 50 cm and di = 46 cm. What is the convex lens's principal focal length?

Using the lens equation we have:
1/F = 1/(do) + 1/(di)
1/F = 1/50 + 1/46
1/F = 0.02 + 0.02174
1/F = 0.04174
F = 1/0.04174
F = 23.96 cm

Question 2
Just to show you that this formula works in any unit, let's do the same experiment but measure the distances in mm, so do = 500 mm and di = 460 mm. What is the principal focal length?

Using the lens equation we have:
1/F = 1/(do) + 1/(di)
1/F = 1/500 + 1/460
1/F = 0.002 + 0.0021739
1/F = 0.0041739
F = 1/0.0041739
F = 239.58 mm
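The two worked questions can be checked with a one-line helper (Python, illustrative only; any unit works as long as do and di use the same one):

```python
def focal_length(d_o, d_i):
    """Principal focal length from the lens equation 1/F = 1/do + 1/di."""
    return 1 / (1 / d_o + 1 / d_i)

# Question 1: distances in cm.
print(round(focal_length(50, 46), 2))    # 23.96
# Question 2: the same distances in mm give the same answer, now in mm.
print(round(focal_length(500, 460), 2))  # 239.58
```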
(1/2)x = a
x + y = 5a

In the system of equations above, a is a constant such that 0 < a < 1/3. If (x, y) is a solution to the system of equations, what is one possible value of y? (Answer: 0 < y < 1)

The easiest way to go here is to plug in! Let's say a = 0.1, which is certainly between 0 and 1/3. The first equation tells us:

x = 2a = 0.2

From there, we can solve for y in the second equation:

0.2 + y = 5(0.1) = 0.5, so y = 0.3

So there you go: grid in 0.3 as a possible value of y and move along.

To see the whole range of possible answers, plug in the endpoints you're given for a. If a = 0, then the first equation tells us that x = 0, too. If x and a both equal zero, then the second equation becomes 0 + y = 5(0), which of course means y = 0, too!

OK, now plug in 1/3 for a. The first equation tells us:

x = 2(1/3) = 2/3

Easy enough to solve for y in the second equation:

2/3 + y = 5(1/3) = 5/3, so y = 1

So if a = 0, y = 0, and if a = 1/3, y = 1. That's why the full range of possible answers is 0 < y < 1.
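The plug-in strategy amounts to the closed form x = 2a and y = 5a - 2a = 3a, so y ranges over (0, 1) exactly when a ranges over (0, 1/3). A quick check (Python, illustrative):

```python
# The system: (1/2)x = a  and  x + y = 5a, with 0 < a < 1/3.
def solve(a):
    x = 2 * a        # from (1/2)x = a
    y = 5 * a - x    # from x + y = 5a, i.e. y = 3a
    return x, y

print(solve(0.1))      # (0.2, 0.3)  -> grid in y = 0.3
# The endpoints of the allowed range for a give the bounds on y:
print(solve(0.0)[1])   # 0.0
print(solve(1 / 3)[1]) # ~1.0 (up to floating point)
```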
# Pauli matrices and Lorentz transformations

Consider the Weyl equations: \begin{align} i\sigma^{\mu} \partial_{\mu} \psi_{L} & = 0 \\ i\overline{\sigma}^{\mu} \partial_{\mu} \psi_{R} & = 0, \end{align} where $\sigma^{\mu} = \left ( \sigma^{0}, \sigma^{1}, \sigma^{2}, \sigma^{3} \right )$ and $\overline{\sigma}^{\mu} = \left ( \sigma^{0}, - \sigma^{1}, - \sigma^{2}, - \sigma^{3} \right )$, with $\psi_{L}$ and $\psi_{R}$ being left and right handed Weyl spinors, each with two components, stacked into $$\psi = \binom{\psi_{L}}{\psi_{R}}.$$

These equations are Lorentz invariant. Does this mean that $\psi_{L}' = A \psi_{L}$ and $A \sigma^{\mu} A^{-1} = a_{\nu}^{\mu} \sigma^{\nu}$, similarly for $\psi_{R}$, where $A$ is a matrix and $a_{\nu}^{\mu}$ is a Lorentz transformation?

• This question lacks context. In general, the matrices don't transform in any way, because there is nothing related to Lorentz transformation in their definition. – Prof. Legolasov Jan 11 '17 at 5:02
• I didn't quite understand the question. What is A here? Is it the $2\times 2$ matrix representing Lorentz transformation on $\psi_L$? – SRS Jan 11 '17 at 6:59

It means that the matrices satisfy $$A \sigma^{\mu} A^{-1} = \Lambda^{\mu}{}_{\nu}\, \sigma^{\nu},$$ with $\Lambda^{\mu}{}_{\nu}$ the Lorentz transformation (the $a^{\mu}_{\nu}$ of the question). The left-handed spinor belongs to the $(\frac 1 2, 0)$ representation of the Lorentz group, and the right-handed spinor to the $(0, \frac 1 2)$ representation. (Or maybe the other way around, I'm not sure right now.) This view with representations can be found in Weinberg, Vol. 1, Ch. 2. Viewing spinors in terms of their transformation properties is more the view of texts like Pirani's chapter in Lectures on General Relativity, and Penrose and Rindler's Spinors and Spacetime. They are equivalent, of course, and I think you need both views to really get it, but you may find one easier to start with than the other.
[NTG-context] Nolimits not working with Unicode characters Fri May 13 17:51:27 CEST 2016 On Fri, 13 May 2016, Hans Åberg wrote: > There may not be so many symbols: the “large operators" in the Tex Book, > p. 435, though Unicode have more, and some others like √ U+221A for > \sqrt. If you want, you can collect such mappings in a separate module. > I could not make your code working - does it require a later LuaTex > version (than 0.80.0)? Here is a complete working example (OT: The location of the limit is still ugly with xits fonts) \setupbodyfont[xits,10pt] \appendtoks \catcode∫=\activecatcode \letcharcode ∫ \int \to \everymathematics \setupmathematics[integral=nolimits] Nolimits $\int_0^∞ f ω$, and $∫_0^∞ f ω$ \startformula \int_0^∞ f ω,\quad ∫_0^∞ f ω \stopformula
# Math Help - Am I correct?

1. ## Am I correct?

I have to find a basis for the vector space spanned by these 3 column vectors: [1;-1;2], [2;-3;2] and [1;-3;2].

I have written the vectors as rows of a matrix and row reduced as far as possible (the notation "-2 -->" means add -2 times the pivot row):

[1 -1 2]
[2 -3 2]  -2 --> [0 -1 -2]
[1 -3 2]  -1 --> [0 -2  0]  -2 --> [0 0 4]

Does this mean that one possible basis is {[1;-1;2], [0;-1;-2], [0;0;4]}? Also, does this mean that the dimension of the subspace is 3?

2. Originally Posted by deragon999
Does this mean that one possible basis is {[1;-1;2], [0;-1;-2], [0;0;4]}? Also, does this mean that the dimension of the subspace is 3?

It's enough to prove that the three vectors are independent. Then the three vectors themselves form a basis for the vector space. Dimension of vector space = size of basis, therefore the dimension of the vector space is equal to 3.

3. ## Thanks

OK, I fully understand that... but just because I spent the time on it: is my basis also true, or does that way only work if the vectors are dependent?
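The independence claim in the answer is easy to verify numerically: the matrix with the three vectors as rows has full rank (its determinant is -4, which is nonzero). A quick check (Python/numpy, not part of the original thread):

```python
import numpy as np

# The three given vectors as rows of a matrix.
A = np.array([[1, -1, 2],
              [2, -3, 2],
              [1, -3, 2]])

# Rank 3 means the rows are linearly independent, so the three vectors
# themselves form a basis and the spanned subspace has dimension 3.
print(np.linalg.matrix_rank(A))      # 3
print(round(np.linalg.det(A)))       # -4 (nonzero -> independent)
```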
# How to convert a decimal to a fraction?

Hello friends, welcome to my article on how to convert a decimal to a fraction. First of all, we should write the heading on the page of the notebook.

## How to convert a decimal to a fraction?

(DECIMAL) A decimal number is a number that has a dot (.), or decimal point, between the digits. Basically, decimals are nothing but fractions with a denominator of 10 or a multiple of 10. For example, 4.23, 12.65, 55.1, 3.8, 7.234, etc. are decimals.

(FRACTION) A fraction is a part of a whole number. It is represented as the ratio of two numbers P/Q, where P and Q are integers and Q ≠ 0. The two numbers are called the numerator and the denominator. For example, 2/3 is 2 divided by 3, 5/4 is 5 divided by 4, etc. We can perform all arithmetic operations on fractions. There are three types of fractions: proper, improper, and mixed.

## How to convert a decimal number to a fraction?

Write the given decimal number in the form P/Q with Q = 1. Now multiply and divide the numerator and denominator by 10^n, where n is the number of decimal places. Then simplify the fraction obtained.

OR, to convert a decimal number into a fraction, we write 1 (one) just below the decimal point and write a 0 (zero) after the 1 for each digit that follows the decimal point. The given number (with the decimal point removed) over this new number gives the fraction.

## Let us understand this with an example.

Convert 25.805 (a decimal number) to a fraction:

\displaystyle 25.805\times \frac{{1000}}{{1000}}

\displaystyle =\frac{{25805.000}}{{1000}}

\displaystyle =\frac{{25805}}{{1000}}

\displaystyle =\frac{{5161\times 5}}{{200\times 5}}

\displaystyle =\frac{{5161}}{{200}}
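The same conversion can be checked with Python's standard `fractions` module (illustrative; the module reduces to lowest terms automatically, which is the "simplify" step above):

```python
from fractions import Fraction

# 25.805 has three decimal places, so multiply and divide by 10^3:
# 25.805 = 25805/1000, which Fraction reduces to lowest terms.
frac = Fraction(25805, 1000)
print(frac)  # 5161/200
```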
# Feasibility of Liquid Water at the Bottom of Valles Marineris

Last night I was looking at NASA's equations for the pressure and temperature of the atmosphere of Mars.

P (in kilopascals) = $$0.699 \, e^{-0.00009 h}$$, with h the altitude in meters.

The Valles Marineris is 11 kilometers deep at its lowest point.

$$P = 0.699 \, e^{(-0.00009)(-11000)} = 1.88 \text{ kPa} = 0.27 \text{ psi}$$

Correction: a commenter informed me that Valles Marineris is 4.5 kilometers beneath "sea level" at its lowest point.

$$P = 0.699 \, e^{(-0.00009)(-4500)} = 1.048 \text{ kPa} = 0.15 \text{ psi}$$

Then I looked up this chart of water boiling temperatures at low pressures, and saw that the boiling temperature of water at this pressure is a very reasonable, if slightly chilly, $$60^{\circ}\mathrm{F} / 15^{\circ}\mathrm{C}$$.

Correction: $$45^{\circ}\mathrm{F} / 7^{\circ}\mathrm{C}$$

I wonder if it's possible for surface water (rivers, lakes) to exist on Mars at the bottom of the Valles Marineris? Have we completely ruled it out with satellites surveying the region? Would the relative dryness of the rest of the planet generally transport any water-bearing air in the region somewhere else (eventually drying the place out), or, since it's at the bottom of a chasm, might the humidity remain?

• Unfortunately, your statement of "11 kilometers deep at its lowest point" is inaccurate. The deepest point is only -4.5 km below datum. The valley is located in an elevated plateau, thus actual altitude does not match apparent depth. You would do better to go to the Hellas impact basin, which goes to -8 km below datum. – PcMan Mar 3 at 0:50
• Thanks for the tip. – James McLellan Mar 3 at 10:51
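The pressure model is easy to evaluate for any altitude, including the Hellas basin suggested in the comments (Python, illustrative; the 0.612 kPa figure is the triple point of water, below which liquid water cannot exist at any temperature):

```python
import math

def mars_pressure_kpa(h):
    """NASA's fit for Martian atmospheric pressure; h in meters above datum."""
    return 0.699 * math.exp(-0.00009 * h)

print(round(mars_pressure_kpa(-4500), 3))   # 1.048  (floor of Valles Marineris)
print(round(mars_pressure_kpa(-8000), 3))   # pressure in the Hellas basin, ~1.4
# Liquid water requires the pressure to exceed the triple point, ~0.612 kPa,
# so both depressions clear that bar, though only barely.
```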
Consider a lawnmower of weight $w$ which can slide across a horizontal surface with a coefficient of friction $\mu$. In this problem the lawnmower is pushed using a massless handle, which makes an angle $\theta$ with the horizontal. Assume that $F_h$, the force exerted by the handle, is parallel to the handle.

Find the magnitude, $F_h$, of the force required to slide the lawnmower over the ground at constant speed by pushing the handle.

The solution for $F_h$ has a singularity (that is, becomes infinitely large) at a certain angle $\theta_{\rm critical}$. For any angle $\theta > \theta_{\rm critical}$, the expression for $F_h$ will be negative. However, a negative applied force $F_h$ would reverse the direction of friction acting on the lawnmower, and thus this is not a physically acceptable solution. In fact, the increased normal force at these large angles makes the force of friction too large to move the lawnmower at all. Find an expression for $\tan(\theta_{\rm critical})$.
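One way to obtain the requested expressions (a sketch, assuming the handle pushes downward at angle $\theta$ below the horizontal and kinetic friction $f = \mu N$ opposes the motion):

```latex
% Constant speed => net force is zero in both directions.
\begin{align*}
  \text{horizontal:}\quad & F_h\cos\theta = \mu N \\
  \text{vertical:}\quad   & N = w + F_h\sin\theta \\
  \Rightarrow\quad        & F_h = \frac{\mu w}{\cos\theta - \mu\sin\theta}
\end{align*}
% The denominator vanishes at the critical angle:
\begin{equation*}
  \cos\theta_{\rm critical} = \mu\sin\theta_{\rm critical}
  \quad\Longrightarrow\quad
  \tan(\theta_{\rm critical}) = \frac{1}{\mu}
\end{equation*}
```

This matches the description in the problem: as $\theta \to \theta_{\rm critical}$ from below, the denominator $\cos\theta - \mu\sin\theta \to 0^{+}$ and $F_h$ blows up; beyond it, the formula goes negative and is no longer physical.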
International Skeptics Forum

Merged: Electric Sun Theory (Split from: CME's, active regions and high energy flares)

Tags: Alfven waves, Birkeland currents, hannes alfven, Kristian Birkeland

17th November 2011, 12:36 PM  #5041
Argumemnon, World Maker
Join Date: Oct 2005
Location: In the thick of things
Posts: 66,185

Originally Posted by Michael Mozina
I guess I just "assumed" that since you got involved in this particular thread, of all your choices of various threads on this board

You think I can only participate in one thread at any time?

Quote: that you expected to be 'educated' in electric sun theory.

Apparently not by you, but it's no fault of mine.

Quote: You've asked me for evidence as well. I was simply discussing the topic with you.

Yeah, rhetoric in place of evidence. That'll work.

Quote: I try to respond to EVERYONE that posts here, not just you Belz.

And yet you keep addressing me while answering other people. I just found it odd.

Still waiting for that evidence for electrical discharges in plasma.
__________________

17th November 2011, 12:38 PM  #5042
Michael Mozina, Banned
Join Date: Feb 2009
Posts: 9,361

Originally Posted by GeeMack
... the energy released in solar flares and the heating of the corona was a theory which he pioneered and aptly nicknamed "magnetic reconnection".

The term simply relates to an E induced, *ELECTRICAL DISCHARGE* process in plasma.
Anywhere you see that term, it simply refers to an electrical discharge process in a plasma.

Last edited by Michael Mozina; 17th November 2011 at 12:45 PM.

17th November 2011, 12:41 PM  #5043
Michael Mozina, Banned
Join Date: Feb 2009
Posts: 9,361

Originally Posted by Belz...
Apparently not by you, but it's no fault of mine.

It's certainly no fault of mine.

Quote: Still waiting for that evidence for electrical discharges in plasma.

What did you think of Dungey's original paper that I handed you? Is that "evidence" in your opinion?

17th November 2011, 01:03 PM  #5044
Reality Check, Penultimate Amazing
Join Date: Mar 2008
Location: New Zealand
Posts: 20,136

Originally Posted by Michael Mozina
What did you think of Dungey's original paper that I handed you? Is that "evidence" in your opinion?

I suspect that Belz... (unlike you) can understand what he reads. Thus he will not be fooled into using an obsolete term for large high current density. From Michael Mozina's delusion about electrical discharges in plasma:

Originally Posted by Reality Check
What remains is Dungey's obsolete usage. Dungey's and Peratt's definitions of discharge are different. 13th January 2011 (10 months and counting)
Dungey's 'electric discharge' = high current density in magnetic reconnection 18th October 2011
MM: Citing Dungey means that cause of solar flares is magnetic reconnection! 8th November 2011

If you want to continue documenting your obsession that electrical discharges cause solar flares then please do so and we will continue to point out that it is a delusion. If you want to change that assertion to "electrical discharges happen in solar flares" then all you are doing is documenting your ignorance by using an obsolete term (or an obsession with the term "electrical discharge").

__________________
NASA Finds Direct Proof of Dark Matter (another observation) (and Abell 520) Electric comets still do not exist!
17th November 2011, 01:42 PM #5047 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Reality Check How stupid - I do not have to read the entire book to see that Somov states that magnetic field lines reconnect. How stupid you ignored the parts in YELLOW! Quote: 4.4.2 Reconnection in a Vacuum. ... Reconnection in vacuum is a real physical process: magnetic field lines move to the X-type neutral point and reconnect in it as well as the electric field is induced and can accelerate a charge particle or particles in the vicinity of the neutral point. You can't INDUCE an E field in a PURE VACUUM that is utterly devoid of plasma particles. He's also talking about ACCELERATING charged particles. In that case I don't know what he even means by the term "vacuum". It sounds like a "low density" plasma as best as I can tell, based on the inducement of E and the fact it's accelerating charged particles. I saw how you KLUDGED the term 'electrical discharge in cosmic plasma' by Peratt. I've also read at least one other entire plasma physics book by Somov where he spends an entire chapter tying this all back to the E orientation, and he clearly explained "reconnection" as an E induced process in plasma. Last edited by Michael Mozina; 17th November 2011 at 01:46 PM. 17th November 2011, 01:49 PM #5048 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 http://www.internationalskeptics.com...postcount=5027 Honestly Clinger, Your exceptional math skills may in fact be your saving grace if you ponder the problem from the standpoint of sources and sinks in vector calculus. You're literally *OVERTHINKING* the problem IMO and trying to do EVERYTHING related to 'reconnection' in a single step, in a vacuum. You'd literally need a monopole to pull off that trick. I seriously doubt that any of the rest of the EU haters even comprehends what a source or sink might be in relationship to vector fields, but I have to believe that you do.
Your math skills are very likely to be your saving grace here if you use them IMO. Either BOTH Wiki pages are incorrect, or you are incorrectly and improperly turning X into a source and sink. It's really that simple. That is probably the BEST shot I can take at reaching you at the level of mathematics. Last edited by Michael Mozina; 17th November 2011 at 01:59 PM. 17th November 2011, 01:55 PM #5049 Reality Check Penultimate Amazing Join Date: Mar 2008 Location: New Zealand Posts: 20,136 Originally Posted by Michael Mozina How stupid you ignored the parts in YELLOW! How stupid you ignored the parts in YELLOW, RED and BLUE! Quote: 4.4.2 Reconnection in a Vacuum. ... Reconnection in vacuum is a real physical process: magnetic field lines move to the X-type neutral point and reconnect in it as well as the electric field is induced and can accelerate a charge particle or particles in the vicinity of the neutral point. In a vacuum an electric field is induced. This is basic physics because you have a changing magnetic field and so Maxwell's equations tell you that an electric field is induced. Now what happens if you have a charge particle or particles (i.e. you go from a vacuum to a non-vacuum)? Again basic physics - the induced electric field can accelerate them! Somov introduces charged particles at the end of this section because his next section is "4.4.3. Reconnection in plasma". __________________ NASA Finds Direct Proof of Dark Matter (another observation) (and Abell 520) Electric comets still do not exist! 17th November 2011, 02:03 PM #5050 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Reality Check How stupid you ignored the parts in YELLOW, RED and BLUE! No, I'm not. Somov's "vacuum" evidently includes plasma particles because he also describes INDUCED E fields and the acceleration of PARTICLES. He has actual real PARTICLES in his vacuum apparently. It's not a "pure vacuum".
Clinger is working in a PURE vacuum until he graduates to plasma physics. Since Clinger's math skills are exceptional, there's definitely still hope for him. You don't even know what a source or a sink might be as it relates to vector calculus. 17th November 2011, 02:04 PM #5051 Reality Check Penultimate Amazing Join Date: Mar 2008 Location: New Zealand Posts: 20,136 Originally Posted by Michael Mozina I seriously doubt that any of the rest of the EU haters even comprehends what a source or sink might be in relationship to vector fields, You are wrong: No one here (except in your head) is an "EU hater". I for example pity EUers. Your fantasy has little to do with the EU fantasy. None of them that I have seen seems ignorant enough to claim actual electrical discharges in plasma. MM: Try to get it right for once - I pity you, not hate you. I know what sources and sinks are in relation to vector fields. They are ... sources and sinks in the vector field! I know that this has little to do with the fact that magnetic field lines that cross a neutral point have to break: MM: The definition of magnetic field lines = no lines at a neutral point. __________________ NASA Finds Direct Proof of Dark Matter (another observation) (and Abell 520) Electric comets still do not exist! 17th November 2011, 02:07 PM #5052 GeeMack Banned Join Date: Aug 2007 Posts: 7,235 Originally Posted by Michael Mozina The term simply relates to an E induced, *ELECTRICAL DISCHARGE* process in plasma. Anywhere you see that term, it simply refers to an electrical discharge process in a plasma. That is as dishonest as any argument yet offered in this thread. Magnetic reconnection does not translate to electrical discharge. Dungey wasn't the idiot the electric Sun proponents seem to believe he was. If he had meant to suggest magnetic reconnection was synonymous with electrical discharge, he would have said that. He didn't. Any claim to that effect is simply false.
17th November 2011, 02:12 PM #5053 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by GeeMack That is as dishonest as any argument yet offered in this thread. Magnetic reconnection does not translate to electrical discharge. No, this is just the most dishonest statement yet offered by a hater. It's an ELECTRICAL DISCHARGE PROCESS IN PLASMA according to Dungey. You and RC parrot a single term "magnetic reconnection" yet have no understanding whatsoever of the physics it represents. You're like a couple of parrots without a clue. 17th November 2011, 02:31 PM #5054 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Clinger: The only thing that happens in your vacuum contraption as you change the current in the rods is the vacuum experiences changes in the magnetic flux density *OVER THE WHOLE FIELD*, not only at X. No induced fields will occur in the vacuum, and no particles will be accelerated in the vacuum. No electrical discharge will ensue in the vacuum. When you get to part 5 and ADD PLASMA, I promise you that all of your maths will (eventually) make sense and Gauss's laws will not be violated in any way, shape or form. E fields will be induced, current will flow, electrons will fly and electrical discharges will ensue. Last edited by Michael Mozina; 17th November 2011 at 02:34 PM. 17th November 2011, 03:02 PM #5055 Reality Check Penultimate Amazing Join Date: Mar 2008 Location: New Zealand Posts: 20,136 Originally Posted by Michael Mozina It's an ELECTRICAL DISCHARGE PROCESS IN PLASMA according to Dungey. Still wrong: Michael Mozina's delusion about electrical discharges in plasma: It is not a PROCESS. It is an obsolete use of the term ELECTRICAL DISCHARGE to describe large current density in magnetic reconnection. __________________ NASA Finds Direct Proof of Dark Matter (another observation) (and Abell 520) Electric comets still do not exist!
17th November 2011, 03:25 PM #5056 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Reality Check Still wrong: Michael Mozina's delusion about electrical discharges in plasma: It is not a PROCESS. It is an obsolete use of the term ELECTRICAL DISCHARGE to describe large current density in magnetic reconnection. Keep in mind Clinger that RC and GM do not give a damn about whether you're right or wrong in terms of physics, nor do they care what you look like at the end of this process. In terms of basic EM theory, neither one of them could help you (I know for sure because I quizzed them this weekend), and neither one of them has your math skills. You're pretty much swinging in the wind on your own now. Neither of them even understands the significance of the terms source and sink as it relates to vector calculus and our discussion. IMO it's better to bite the bullet now and be done and move on to part 5 than to let those two lead you down the primrose path any further. Your call. Last edited by Michael Mozina; 17th November 2011 at 03:35 PM. 17th November 2011, 03:59 PM #5057 Reality Check Penultimate Amazing Join Date: Mar 2008 Location: New Zealand Posts: 20,136 Originally Posted by Michael Mozina Keep in mind Clinger that RC and GM do not give a damn about whether you're right or wrong in terms of physics,... Keep in mind Clinger that Michael Mozina is lying. I care about the physics being right or wrong. I see no problems with the physics or math in your posts. I can see a good educator providing a clear lesson on magnetic reconnection. The end of the process should be a nice undergraduate level lesson on the MR basics as covered in MR textbooks. You should follow my previous advice and disregard MM's previous demands that you not go onto part 5. __________________ NASA Finds Direct Proof of Dark Matter (another observation) (and Abell 520) Electric comets still do not exist!
Last edited by Reality Check; 17th November 2011 at 04:03 PM. 17th November 2011, 09:37 PM #5058 Tim Thompson Muse Join Date: Dec 2008 Posts: 969 Induced fields in a vacuum Originally Posted by Michael Mozina You can't INDUCE an E field in a PURE VACUUM that is utterly devoid of plasma particles. Originally Posted by Michael Mozina No induced fields will occur in the vacuum, ... You have made that particular claim now multiple times in this thread, and it's just plain wrong. Of course you will get induced fields in a vacuum. How could you possibly not get induced fields in a vacuum?

Faraday's Law of Induction: $\nabla \times \mathbf{E} = -\dfrac{\partial \mathbf{B}}{\partial t}$

Ampere's Law: $\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \dfrac{\partial \mathbf{E}}{\partial t}$

Faraday's law of induction works like gangbusters in a vacuum. Set the current density (J) in Ampere's Law to zero and we are back to the vacuum again. Change the electric field and you get an induced magnetic field. Of course in a vacuum there are no charged particles to accelerate, but there are still fields (which you can check by putting test charges into your vacuum and watching them bend to the will of the fields present). Yes, you certainly can induce B and E fields in a vacuum. __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. -- Bertrand Russell 17th November 2011, 11:38 PM #5060 The Man Scourge, of the supernatural Join Date: Jun 2007 Location: Poughkeepsie, NY Posts: 11,971 Originally Posted by Michael Mozina How stupid you ignored the parts in YELLOW! You can't INDUCE an E field in a PURE VACUUM that is utterly devoid of plasma particles. He's also talking about ACCELERATING charged particles. In that case I don't know what he even means by the term "vacuum".
It sounds like a "low density" plasma as best as I can tell, based on the inducement of E and the fact it's accelerating charged particles. I saw how you KLUDGED the term 'electrical discharge in cosmic plasma' by Peratt. I've also read at least one other entire plasma physics book by Somov where he spends an entire chapter tying this all back to the E orientation, and he clearly explained "reconnection" as an E induced process in plasma. What??? "You can't INDUCE an E field in a PURE VACUUM" So charged particles need some "plasma particles" between them? You do understand that the vacuum has electrical properties as well as magnetic (otherwise EM waves could not travel) don't you? Isn't that the whole principle of the "EU" that electrical properties always dominate? It seems now you're saying that an E field requires some intervening plasma while the EM properties of the vacuum (and experimentation) demonstrate that a magnetic field doesn't. __________________ BRAINZZZZZZZZ 18th November 2011, 03:43 AM #5061 Argumemnon World Maker Join Date: Oct 2005 Location: In the thick of things Posts: 66,185 Originally Posted by Michael Mozina It's certainly no fault of mine. "You started it." "No, you did!" __________________ 18th November 2011, 09:15 AM #5062 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson You have made that particular claim now multiple times in this thread, and it's just plain wrong. Of course you will get induced fields in a vacuum. How could you possibly not get induced fields in a vacuum?

Faraday's Law of Induction: $\nabla \times \mathbf{E} = -\dfrac{\partial \mathbf{B}}{\partial t}$

Ampere's Law: $\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \dfrac{\partial \mathbf{E}}{\partial t}$

Faraday's law of induction works like gangbusters in a vacuum. Set the current density (J) in Ampere's Law to zero and we are back to the vacuum again.
Change the electric field and you get an induced magnetic field. Of course in a vacuum there are no charged particles to accelerate, but there are still fields (which you can check by putting test charges into your vacuum and watching them bend to the will of the fields present). Yes, you certainly can induce B and E fields in a vacuum. As you and The Man have surmised and correctly pointed out, that particular statement of mine is both sloppy and factually incorrect. I should have used (and been using) the term "electrical current", not "E field". My apologies, and thank you both for pointing it out. I amend my statements to read: "You can't INDUCE an electrical current in a PURE VACUUM that is utterly devoid of plasma particles. No induced current will occur in the vacuum." FYI Tim, I'll spend some time this weekend to see if I can find the paper in question by Priest about monopoles. In the meantime, since you and The Man have a proven track record now of pulling the splinters and twigs out of my eye over my errors in this discussion, how about spending some time with Clinger explaining to him that *IN A VACUUM* (not a plasma) B field lines have no source or sink, no beginning or ending and that a NULL is not a "beginning" nor an "ending" of any B line? That would really help move things along. Last edited by Michael Mozina; 18th November 2011 at 09:45 AM. 18th November 2011, 09:54 AM #5063 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 FYI Tim, neither of those two papers is the monopole paper in question. The particular paper I'm thinking of was by Priest only (no other author that I recall), and it didn't discuss plasma physics to my recollection. It was pretty much a fully B oriented paper as I recall. 18th November 2011, 10:02 AM #5064 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Holy Cow! Is Priest a paper writing machine or what? He makes Alfven look lazy! ADS has over 500 listings by Priest!
This is a bit like looking for a needle in a haystack. I'll work on it Tim, but I have a real job to work today as well. I'll find it, but it will take me some time. 18th November 2011, 11:14 AM #5065 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson The properties of sources and sinks of a linear force-free field Demoulin & Priest; Astronomy and Astrophysics 258(2): 535-541 (May 1992) Tim, check out section 3.1 between equations 16 and 17 where Priest mathematically turns moving charged particles (aka current) into "magnetic charges" (aka monopoles). Are you sure that you and I can't just agree on "current reconnection" and be done with it and move on to "magnetic flux tubes"? You of all people should be able to see how Priest is converting electrons into monopoles between equations 16 and 17. 18th November 2011, 12:14 PM #5066 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson In these papers, monopolar fields are used as a device to simplify calculation. You hit the nail on the head IMO, but that's the WHOLE PROBLEM. They're basically CONVERTING current into monopoles mathematically to "simplify" the equations for B Tim. It's still CURRENT at the level of physics, even if it's "magnetic charge" for the purposes of simplifying for a B orientation. I really don't understand why you and I cannot just agree on the term "current reconnection" and move on. I DO NOT DOUBT it's a useful tool for simplifying the math to call it "magnetic charge". I DO DOUBT that monopoles actually physically exist, and I do not doubt that it's actually CURRENT that does the 'reconnecting'. Last edited by Michael Mozina; 18th November 2011 at 12:18 PM. 18th November 2011, 01:09 PM #5067 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Tim, think about how confusing that "device" as you call it, must be to poor Clinger about now. Holy smokes!
IMO basic theory is hard enough, and plasma physics is CHALLENGING for anyone, but taking "poetic license" with physics is unacceptable IMO. I really do feel sorry for any poor uninitiated soul trying to unravel all this "piss poor" naming nonsense. Even a really good mathematician like Clinger is bound to completely confuse the PROCESS as he's been doing now since day one. If a good mathematician gets confused, what hope is there for a struggling college student to grasp the "magnetic reconnection" process correctly? If you simply called it "current reconnection", it would make perfect sense to that graduating freshman. Honestly, someone competent, BESIDES ME on basic theory and sources and sinks needs to set Clinger straight. I'm dying to find out if he's coming out of the closet in part five and this is just annoying at this point. Please Tim or The Man explain to Clinger that B fields have no source, no sink, no beginning and no ending. Do so privately if you prefer. If and when anyone finds a REAL monopole rather than an ELECTRON RENAMING DEVICE that allows mathematical equations to be simplified for B, then you can call it "magnetic reconnection" with my blessings. What poor Clinger has gone through over the past year however DEMONSTRATES CONCLUSIVELY that the name selected by Dungey to describe an electrical discharge in a plasma *STINKS TO HIGH HEAVEN*. It's TOO DAMN CONFUSING! Last edited by Michael Mozina; 18th November 2011 at 01:19 PM. 19th November 2011, 05:34 PM #5068 Tim Thompson Muse Join Date: Dec 2008 Posts: 969 "Current Reconnection" and Magnetic Monopoles Why Not "Current Reconnection"? Originally Posted by Michael Mozina I really don't understand why you and I cannot just agree on the term "current reconnection" and move on. { ... } I do not doubt that it's actually CURRENT that does the 'reconnecting'. Because "current reconnection" is a very poor description of what actually, physically happens.
Currents "reconnect", if that word is even applicable, only after the magnetic field topology has changed ("magnetic reconnection" is a change in the topology of the magnetic field). The energy that powers the "reconnection" of the currents is derived directly from the magnetic field, which has topologically transformed into a lower energy state. The energy source is the reconnection of the magnetic field and therefore "magnetic reconnection" is the correct and proper description of what physically happens. I do not doubt that it is actually the magnetic field that does the reconnecting. The physics is really obvious. Magnetic Monopoles and Magnetic Reconnection Originally Posted by Michael Mozina Originally Posted by Tim Thompson In these papers, monopolar fields are used as a device to simplify calculation. You hit the nail on the head IMO, but that's the WHOLE PROBLEM. They're basically CONVERTING current into monopoles mathematically to "simplify" the equations for B Tim. It's still CURRENT at the level of physics, even if it's "magnetic charge" for the purposes of simplifying for a B orientation. It is certainly physically correct that all magnetic fields are generated by the motion of electrically charged particles, and I have no problem calling those moving charged particles a "current", so long as we understand that electrically neutral currents are allowed (meaning that the number of negative charged particles is equal to the number of positive charged particles). Originally Posted by Michael Mozina I DO NOT DOUBT it's a useful tool for simplifying the math to call it "magnetic charge". I DO DOUBT that monopoles actually physically exist, and I do not doubt that it's actually CURRENT that does the 'reconnecting'. Well, whether or not magnetic monopoles actually physically exist is irrelevant anyway.
Their actual physical existence is not assumed in the paper, which in fact goes on to prove that magnetic monopoles cannot exist in a linear force-free field. So I don't see any problem. You see, the magnetic field has a geometry, a shape. We know that the magnetic field is actually generated by electrical currents, but that same shape could be generated by an arbitrary collection of magnetic monopoles. So the authors replace the current by magnetic monopoles, which they can then use as sources and sinks to create a multipole expansion of the magnetic field. Like I said, and you apparently agree, this is "a device to simplify calculation", and it really is not supposed to have anything at all to do with the physics of what is going on, it's just a computational trick. And actually, it's not a very useful tool and I don't think it has been done again since then. If you look at the book "Magnetic Reconnection: MHD Theory and Applications" by Priest & Forbes, Cambridge University Press, 2000, you will not see any mention of magnetic monopoles or monopolar fields anywhere. I can find no other papers that use this trick either. It was a passing fad that has passed, and is in fact no longer relevant to any real discussion of magnetic reconnection. I will address Clinger and the termination of field lines in another post. __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. -- Bertrand Russell 19th November 2011, 06:08 PM #5069 Tim Thompson Muse     Join Date: Dec 2008 Posts: 969 Magnetic field lines end at null points Originally Posted by Michael Mozina Honestly, someone competent, BESIDES ME on basic theory and sources and sinks needs to set Clinger straight. I don't think Clinger needs to be set straight since I don't think he has done or said anything wrong. 
He never said there was a "source" or a "sink" for the magnetic field lines, he simply said that they can end at a null point, and on that he is completely correct. It is certainly true that a magnetic field line cannot end in the "flapping in the breeze" sense and the reason is easy enough to understand. If the end of the field line were "flapping in the breeze" then it would be easy to obtain a non-zero divergence over a Gaussian surface element circumscribed around the end point, violating Gauss' Law. In the case of a null point, there is no such surface; every surface circumscribed around a null point will show a zero divergence and satisfy Gauss' Law. You are confusing the issue yourself by appealing to popular language and ignoring the physics. The one and only real physical principle involved is Gauss' Law, and any magnetic field configuration that does not violate Gauss' Law (or some other law) is allowed. "Field lines cannot end" is not a law of physics, but Gauss' Law is. Magnetic field lines cannot end at a "source" or a "sink", either of which would be a physical magnetic monopole. However, a field line certainly can terminate ("end") at a null point or neutral point, which, as one might guess from the name, is neither a source nor a sink, therefore respecting Gauss' Law. Clinger is correct, Mozina is wrong. __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. -- Bertrand Russell 19th November 2011, 06:56 PM #5070 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson Clinger is correct, Mozina is wrong. IMO, it's highly disappointing you chose "hate" over science Tim. You certainly didn't do Clinger any favors. I can't believe that response even *AFTER* I took the time to SHOW YOU which EQUATION Priest used to convert electrons into monopoles (magnetic charges).
The *SOURCE* is the ELECTRIC field, and the electrons in that paper Tim, not the magnetic NULL! GRR. I'm taking the night off. I'll deal with your nonsense later. Last edited by Michael Mozina; 19th November 2011 at 07:00 PM. 19th November 2011, 09:50 PM #5071 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson I don't think Clinger needs to be set straight since I don't think he has done or said anything wrong. Ok Tim, what do YOU PERSONALLY expect to "reconnect" in part 4, in a vacuum, *WITHOUT* plasma? 19th November 2011, 10:23 PM #5072 Tim Thompson Muse     Join Date: Dec 2008 Posts: 969 What "reconnects"? Originally Posted by Michael Mozina Ok Tim, what do YOU PERSONALLY expect to "reconnect" in part 4, in a vacuum, *WITHOUT* plasma? Physically: Nothing "reconnects", that's just a word we use to name the process. What physically happens is that the magnetic field changes from a higher energy state to a lower energy state, and the energy lost by the field escapes as electromagnetic photons. In the presence of a plasma, that energy will transfer to the plasma and manifest itself as an impulsive increase in plasma kinetic energy. Mathematically: Field lines are the mathematical tool of choice to describe and analyze any field, ever since Maxwell. Mathematically, the lines of force representing the physical magnetic field literally reconnect, resulting in a change in the topology of the field. The name, "magnetic reconnection", comes from this mathematical reconnection of field lines. That's what I, personally think. __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. -- Bertrand Russell 19th November 2011, 10:38 PM #5073 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson Physically: Nothing "reconnects", that's just a word we use to name the process. 
You mean an electrical discharge process that requires an INDUCED E field followed by an electrical discharge? How can that POSSIBLY NOT involve "current reconnection"? Quote: What physically happens is that the magnetic field changes from a higher energy state to a lower energy state, and the energy lost by the field escapes as electromagnetic photons. In the presence of a plasma, that energy will transfer to the plasma and manifest itself as an impulsive increase in plasma kinetic energy. http://www.internationalskeptics.com...postcount=3861 Quote: I suggested that experiment to you because it would have helped you to understand that Dungey is describing magnetic reconnection. There is no "circuit reconnection" in Dungey's paper. There is no "current reconnection" in Dungey's paper. The magnetic reconnection described in Dungey's paper can be reproduced without plasma. The magnetic reconnection described in Dungey's paper can occur with a near-zero electric (E) field. Which of these statements by Clinger is true and which is false Tim? 19th November 2011, 10:42 PM #5074 Michael Mozina Banned Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson The name, "magnetic reconnection", comes from this mathematical reconnection of field lines. That's what I, personally, think. I personally think that unless you or Clinger has a monopole in your pocket, no B field lines have a source, a sink, a beginning or an ending Tim. You're playing with fire at this point IMO. I've even taken the time to show you WHICH equations were involved in Priest's "magic" where he turns an E field source into a B field "source" by turning electrons into "magnetic charges"! Come on! What's it going to take? 19th November 2011, 11:34 PM #5075 Tim Thompson Muse Join Date: Dec 2008 Posts: 969 What "reconnects"? II Originally Posted by Michael Mozina You mean an electrical discharge process that requires an INDUCED E field followed by an electrical discharge?
How can that POSSIBLY NOT involve "current reconnection"? It certainly involves currents, but for the most part "reconnection" is a pretty lousy description of what the currents physically do. They accelerate. They energize. Say that the process involves "current acceleration" or "current energizing" and that's fine with me. But see your own statement: The E-field comes first, and then the currents accelerate. That's just what I have been saying all along. The electric field that accelerates the charged particles comes into existence as a result of the "reconnection" of the magnetic field. It's called "magnetic reconnection" because that reconfiguration of the magnetic field is the local energy source for everything else. Originally Posted by Michael Mozina Originally Posted by W.D.Clinger I suggested that experiment to you because it would have helped you to understand that Dungey is describing magnetic reconnection. There is no "circuit reconnection" in Dungey's paper. There is no "current reconnection" in Dungey's paper. The magnetic reconnection described in Dungey's paper can be reproduced without plasma. The magnetic reconnection described in Dungey's paper can occur with a near-zero electric (E) field. Which of these statements by Clinger is true and which is false Tim? The first 4 are obviously true. The last one about a near-zero electric field probably is, at least in principle. But I would want to look at Dungey's paper again to make sure there is a specific compatibility with that specific paper. Some of these experiments are performed with external applied fields in addition to the fields generated in the experiment. __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it.
-- Bertrand Russell 19th November 2011, 11:47 PM #5076 Tim Thompson Muse     Join Date: Dec 2008 Posts: 969 Priest & Monopoles & Magnetic Reconnection II Originally Posted by Michael Mozina I personally think that unless you or Clinger has a monopole in your pocket, no B field lines have a source, a sink, a beginning or an ending Tim. No sources & no sinks, I agree. But I already said that, you know, ... Originally Posted by Tim Thompson It is certainly true that a magnetic field line cannot end in the "flapping in the breeze" sense and the reason is easy enough to understand. If the end of the field line were "flapping in the breeze" then it would be easy to obtain a non-zero divergence over a Gaussian surface element circumscribed around the end point, violating Gauss' Law. { ... } Magnetic field lines cannot end at a "source" or a "sink", either of which would be a physical magnetic monopole. However, a field line certainly can terminate ("end") at a null point or neutral point, which, as one might guess from the name, is neither a source nor a sink, therefore respecting Gauss' Law. So why do you think it is still relevant? And what have you got against Gauss' law? Originally Posted by Michael Mozina You're playing with fire at this point IMO. I've even taken the time to show you WHICH equations were involved in Priests "magic" where he turns a E field source into a B field "source" by turning electrons into "magnetic charges"! Come on! What's it going to take? And I have taken the time (post 5059 & post 5068) to show you that it is all entirely irrelevant to any real discussion of magnetic reconnection. Come on! What's it going to take? __________________ The point of philosophy is to start with something so simple as not to seem worth stating, and to end with something so paradoxical that no one will believe it. 
-- Bertrand Russell 20th November 2011, 07:30 AM #5077 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson It certainly involves currents, but for the most part "reconnection" is a pretty lousy description of what the currents physically do. They accelerate. No, magnetic reconnection is a lousy description because no B field lines "reconnect". They simply change over time (flux). Quote: They energize. Say that the process involves "current acceleration" or "current energizing" and that's fine with me. But see your own statement: The E-field comes first, and then the currents accelerate. That's just what I have been saying all along. The electric field that accelerates the charged particles comes into existence as a result of the "reconnection" of the magnetic field. It's called "magnetic reconnection" because that reconfiguration of the magnetic field is the local energy source for everything else. It's BASIC INDUCTION Tim! There's nothing "unique" about MR theory. It's a pure acceleration process of charged particles due to FLUX CHANGES, nothing more! Quote: The first 4 are obviously true. The last one about a near-zero electric field probably is, at least in principle. But I would want to look at Dungey's paper again to make sure there is a specific compatibility with that specific paper. Some of these experiments are performed with external applied fields in addition to the fields generated in the experiment. That's just pure BS. Without plasma, Priest cannot do that manipulation with electrons at equation 17 Tim. Without "current", none of the "electrical discharge" processes work at all! No "acceleration". No "reconnection". All that occurs in the vacuum is FLUX CHANGES OVER TIME! Honestly Tim, I have no idea why you stepped back into this mess. You're only going to make yourself look bad. Tell me how you expect to get an "electrical discharge" in step four with no plasma?
Last edited by Michael Mozina; 20th November 2011 at 07:32 AM. 20th November 2011, 07:39 AM #5078 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson No sources & no sinks, I agree. But I already said that, you know, ... Then no "beginning or ending" either Tim. The CURRENT is being redirected. The CHARGE transfer occurs via a CHARGED PARTICLE, not the magnetic field. Equation 17 is a CONVERSION related to the CURRENT flow. The SOURCE in equation 16 is the E field and the SINK is the E field. The CURRENT simply transfers FLUX to a specific location which they then convert to "magnetic charge". Quote: So why do you think it is still relevant? And what have you got against Gauss' law? The better question is what YOU have got against it? If X isn't a source or a sink for B field lines, then it's not the "beginning or the end" of them either! Man are you doing mental gymnastics to try to waltz around that issue. Quote: And I have taken the time (post 5059 & post 5068) to show you that it is all entirely irrelevant to any real discussion of magnetic reconnection. Come on! What's it going to take? This is just pure denial now. I showed you CONCLUSIVELY that X is not a "source" of anything related to the MAGNETIC field. It's just a PASSING THROUGH point for CURRENT. That CURRENT is then being CONVERTED by Priest at equation 17 into "magnetic charge", AKA MONOPOLES! This whole conversation is silly Tim. You NEVER should have gotten back into this conversation. You're only going to come out looking silly unless you have a monopole in your pocket. Unless you've got one, no B lines "reconnect". They don't "begin". They don't "end". They have no source and no sink! Any other mental gymnastics you try to do are only going to reinforce these FACTS! You will NOT get an electrical discharge in vacuum and therefore the PROCESS of "reconnection" cannot be achieved in a vacuum. PERIOD! Get REAL!
Last edited by Michael Mozina; 20th November 2011 at 07:41 AM. 20th November 2011, 07:45 AM #5079 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson The first 4 are obviously true. Boloney. Only 1 is "true". The rest are pure BS. "Reconnection" is a PROCESS in PLASMA. It doesn't happen in a "vacuum". Without current flow, no electron/monopole conversion is possible between equations 16 and 17 Tim. You NEED plasma and CURRENT! They are not optional. If equation 16 is ZERO, equation 17 is zero too Tim. Last edited by Michael Mozina; 20th November 2011 at 07:46 AM. 20th November 2011, 08:27 AM #5080 Michael Mozina Banned   Join Date: Feb 2009 Posts: 9,361 Originally Posted by Tim Thompson Physically: Nothing "reconnects", FYI, that is the whole point Tim. NOTHING physical ever could "reconnect" in a vacuum. No particle acceleration is possible. No discharge can occur. Quote: Mathematically: Field lines are the mathematical tool of choice to describe and analyze any field, ever since Maxwell. Mathematically, the lines of force representing the physical magnetic field literally reconnect, resulting in a change in the topology of the field. The name, "magnetic reconnection", comes from this mathematical reconnection of field lines. That's what I, personally think. Mathematically speaking, Priest specifically converts current into "magnetic charge" between equations 16 and 17. If you don't have CURRENT at that location Tim, you'll get no magnetic charge in equation 17. I'm going to keep harping on those two equations (16&17) from the source and sink paper by Priest until you admit that you NEED current and plasma to get "reconnection" Tim. Last edited by Michael Mozina; 20th November 2011 at 08:29 AM. International Skeptics Forum
# Can a person at the photon sphere of a black hole decide where the black hole is?

A person at the photon sphere of a black hole will observe the following:

• The black hole surface will cover exactly half of the visible sky (say, the left half), and the cosmic horizon will cover the other half.
• Looking in any direction between these hemispheres, the person will see themselves.
• The person will observe no centrifugal force, however fast he moves. Any object the person throws to the left will experience a centrifugal force pulling it further left, while any object thrown to the right will likewise experience a centrifugal force pulling it further right. Approaching the black hole in such circumstances may look quite like actually escaping it to the crew.
• The space to the right as well as the space to the left seemingly includes objects of any length and dimensions; the further objects are, the more they are time-dilated and length-contracted.

Is there any way for a spacecraft commander to decide (by an experiment conducted in a small$^1$ amount of time) which hemisphere is the sky and which hemisphere is the black hole in such circumstances?

--

$^1$ Here small means small compared to the characteristic time scales of the problem.

- What about tidal forces? Of course you can make those negligible by choosing a suitably small observer or a suitably large black hole. – rob May 5 '14 at 14:10
I don't see why the "black hole surface will cover exactly half of the visible sky". – DavePhD May 5 '14 at 14:12
@rob tidal forces also will be symmetric. – Anixx May 5 '14 at 14:13
@DavePhD because all light rays coming from the left originate on the BH surface (if not on closer objects) while all light rays from the right hemisphere come from the cosmic horizon. If the observer is closer to the BH than the photon sphere, the BH will cover a greater part of the sky. At the BH surface (event horizon) the BH covers the whole sky.
– Anixx May 5 '14 at 14:16

The circular orbit at $1.5r_s$ is only possible for massless particles: the innermost stable circular orbit for massive objects is at $3r_s$. So if you are in free fall, simply wait a (very) short time and you'll hit the singularity, and the question will be decided for you. If you are not in free fall, e.g. you are firing some form of rocket motor to hover at $1.5r_s$, then the situation is asymmetric because you will need to fire your motor towards the black hole. Either way, it will be obvious which side is which.

- hmm can you please provide any link about such minimum orbit for massive objects and a proof that it is so different? – Anixx May 5 '14 at 15:27
Also note the following: any object (including massive) that crosses the photon sphere from inside to outside will escape to infinity without firing any engines. To me that indicates that in the limit of very slow crossing speed, the escape to infinity will take a lot of time. – Anixx May 5 '14 at 15:31
@Anixx PDF: eagle.phys.utk.edu/guidry/astro616/lectures/lecture_ch18.pdf discusses both particle orbits and photon orbits in the Schwarzschild metric, and I think also some discussion of the Kerr metric further down. Re second comment: in this case it will take a long time to get to infinity, yes, but the initial speed is quite high, so it won't take very long to get "far away". – Kyle Oman May 5 '14 at 16:00
@Kyle an object that crosses the photon sphere from inside at ANY speed will escape to infinity. Even if the speed is arbitrarily small. – Anixx May 5 '14 at 16:05
@Anixx the question is how do you get to move that way. If you are falling in from the outside, you are doomed to the singularity; if you have rockets, you have to fire them towards the singularity (and burn a lot of fuel) to get to cross the boundary. – Davidmh May 5 '14 at 16:24
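For readers who want to check the $3r_s$ figure, here is a small numeric sketch (not from the original thread) using the standard Schwarzschild effective potential in geometric units $G = M = c = 1$, so $r_s = 2$: circular orbits for massive particles exist only for $r > 3$ (the photon sphere, $1.5r_s$) and are stable only for $r \ge 6$ (the ISCO, $3r_s$). The function names are my own.

```python
# Sanity check of the claim above. The per-unit-mass effective potential is
#   V(r) = -1/r + L^2/(2 r^2) - L^2/r^3.
def L2_circular(r):
    """Squared angular momentum of the circular orbit at radius r.

    Setting V'(r) = 0 gives L^2 = r^2 / (r - 3), which is positive
    (i.e. a circular orbit exists) only for r > 3 = 1.5 r_s.
    """
    return r * r / (r - 3.0)

def Veff_second_derivative(r):
    """V''(r) evaluated on the circular orbit at r; stable iff positive."""
    L2 = L2_circular(r)
    return -2.0 / r**3 + 3.0 * L2 / r**4 - 12.0 * L2 / r**5
```

Evaluating the sign of the second derivative at a few radii reproduces the unstable band between $1.5r_s$ and $3r_s$ and marginal stability exactly at the ISCO.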
Convergence articles

Displaying 81 - 90 of 659

A sketch of the history of topology, beginning with the polyhedron formula, but continuing up to the present.

Having been given the sum of two numbers, a, and the difference of their squares, b, find the numbers.

A man bought a number of sheep for $225; 10 of them having died, he sold 4/5 of the remainder for the same cost and received $150 for them. How many did he buy?

Posters illustrating mathematics concepts in such places as China, Japan, India, and the Americas.

A circle is inscribed in an isosceles trapezoid. Find the relationship of the radius to the sides.

Two travelers, starting at the same time from the same point, travel in opposite directions round a circular railway.

The author became fascinated with the mysterious role of √-1 in electronics. Turning to history for insight, he brings the perspectives of both a practical man and a scholar in conveying the enchantment of mathematics for a non-mathematician.

A math history class visits the 'Beautiful Science' exhibit at the Huntington Library in Southern California.

In a square box that contains 1000 marbles, how many will it take to reach across the bottom of the box in a straight row?

Find the greatest value of $y$ in the equation $a^4 x^2 = (x^2 + y^2)^3$.
# Determine the current and potential difference across each of the resistors in the circuit shown below. In addition, determine the ...

##### What angle does the ray refracted into the water make with the normal to the surface?

A plate of glass with parallel faces having a refractive index of 1.48 is resting on the surface of water in a tank. A ray of light coming from above in air makes an angle of incidence of 32.0° with the norm...

##### Which of the following proteins is essential for receptor-mediated endocytosis from the plasma membrane? Rab protein; Dynamin; COPII; COPI; Clathrin

##### Given v = i and w = i + j: (a) find the dot product v·w; (b) find the angle between v and w; (c) state whether the vectors are parallel, orthogonal, or neither. (Do not round until the final answer. Then round to the nearest tenth as needed.)

##### (a) In what order do you expect the following species to be eluted in normal-phase LC? Indicate with numbers and provide a brief rationale: trinitrotoluene, 1-heptene, heptane, phenol, toluene. (b) The fraction of which component of the mobile phase should be increased when a he...
##### Question 5 (1 pt): Many CRE have become resistant by producing an enzyme capable of hydrolyzing (breaking apart with water) the antibiotic that was intended to target them. This specific enzyme is called? carbapenemase; enterobacteriase; enzymase; carbolfushionase

##### Multiple choice: Classify the following reaction as an oxidation reaction, a reduction reaction, or neither. Oxidation Reaction / Reduction Reaction / This is neither an oxidation reaction nor a reduction reaction.

##### Culver Mining Company recently purchased a quartz mine that it intends to work for the next 10 years. According to state environmental laws, Culver must restore the mine site to its original natural prairie state after it ceases mining operations at the site. To properly account for the mine, Culver...

##### Build a generating function for $a_r$ in the following procedure: in how many ways can we get a sum of $r$ when 5 distinct dice are rolled? $g(x) = (x + x^2 + \cdots + x^6)^5$
##### A rectangular yard is to be completely enclosed by fencing and then divided into three enclosures of equal area by fences parallel to and of the same length as one side of the yard. If 400 ft of fencing is available, find the maximum area of the rectangular yard.

##### A hospital wants to assess the relative diagnostic performance of its four mammography imaging systems. Six radiologists, three experts and three in training, participated in an ROC study. Each radiologist read a set of truth-ascertained images from each camera, evenly divided between diseased and normal cases. An area under the ROC curve was computed for each radiologist and camera, as indicated in the table.

Camera:   1      2      3      4
Experts:  0.978  0.787  0.728  0.951
          0.876  0.821  0.754  0.860
          0.859  0.863  0.884  0.905
Trainees: 0.8020 ...

##### 71 mL of 6 M HCl is on hand, but 3 M HCl is needed for a lab. What is the final volume of the 3 M solution after the 6 M solution is diluted?

##### When heated, KClO3 decomposes into KCl and O2: 2KClO3 → 2KCl + 3O2. If this reaction produced 83.9 g of KCl, how much O2 was produced (in grams)?
##### Problem 7) A hospital admits four patients suffering from a disease for which the mortality rate is 80%. Find the probabilities of the following outcomes: (a) None of the patients survives. (3 pt) (b) Exactly one survives. (3 pt) (c) Two or more survive. (4 pt)

##### Use point B of the frictionless inclined plane as shown in the figure. ... ANSWER: 88.30 ... 10.6 m

##### A particle starts moving along the x axis at time t = 0 seconds. The acceleration of the particle after t seconds (t ≥ 0) is given by a(t) = 2(1 + ...). Initially (at time t = 0 seconds), the particle is at position x = 2 and its velocity is 3 units per second. Find the largest x-value that the particle travels to.
##### Three polarizing plates whose planes are parallel are centered on a common axis. The directions of the transmission axes relative to the common vertical reference direction are ... Linearly polarized light with intensity ... is incident from the left onto the first disk with its plane of polarization parallel to the vertical reference direction. Calculate ...

##### Rewrite the expression using radical notation; then simplify the radical expression: $(-125)^{2/3} + (16)^{2/3}$

##### A goalkeeper wants to clear a soccer ball from his end of a 120 m field. He is standing 10 m from his goal line and estimates that he will need to kick the ball at least 75 m to clear the field. He chooses to kick the ball with an initial velocity of 30 m/s and with a projection angle of 45°. Does he successfully clear the ball, and if so, by how much? Also, assuming another teammate receives the ball, what is the remaining distance they will have to kick the ball to score a goal?

##### How do you think the moderator might change the relationship?

##### QUESTIONS (2 points): Which of the following would be considered an actual cost of a current...
Which of the following would be considered an actual cost of a current period? The $25 of materials in a manufactured chair that is ready to be shipped to the customer; the $22 of direct material cost per unit assumed in the actual budget of a manufacturer of chairs; the expected cost of ...

##### The table below shows the results of a survey in which 141 men and 145 women workers ages 25 to 64 were asked if they have at least one month's income set aside for emergencies. Complete parts (a) through (d). Columns: Less than one month's income / One month's income or more / Total; rows: Men, Women, Total. 76 ... 65 84 149 611...

##### Describe the Confidence Interval: 1. In two to three complete sentences, explain what a confidence interval means (in general), as though you were talking to someone who has not taken statistics. 2. In one to two complete sentences, explain what this confidence interval means for this particular stud...

##### State whether the following are substitutes or complementary items based on: $D_1 = 2000 - 100p_1 \ldots$, $D_2 = 3000 \ldots$ (3 points)

##### Consider the force given by $\mathbf{F} = (x^2 - y^2)\,\mathbf{i} - 2xy\,\mathbf{j}$. (a) Mathematically determine if the force is conservative. (b) If it is conservative, find the associated potential energy. Sketch a plot of $V(x)$ indicating the turning points. Indicate any stable and unstable equilibrium points.
##### Find the value for (a) $x=2$ and $y=1$ and (b) $x=1$ and $y=5$. See Example 2. $2(2x+y)$
# 6. Improve Cache Efficiency by Blocking¶

In Section 5 we saw that properly reordering the loop axes to get a more cache-friendly memory access pattern, together with thread-level parallelization, could dramatically improve the performance of matrix multiplication. The results show that for small-scale matrices, our performance outperforms the NumPy baseline. However, for large matrices, we need to carefully consider the cache hierarchy discussed in Section 1.

%matplotlib inline
import tvm
from tvm import te
import numpy as np
import d2ltvm

target = 'llvm -mcpu=skylake-avx512'

Before we start, let's rerun the benchmark for NumPy as our baseline.

sizes = 2**np.arange(5, 12, 1)
exe_times = [d2ltvm.bench_workload(d2ltvm.np_matmul_timer(n)) for n in sizes]
np_gflops = 2 * sizes **3 / 1e9 / np.array(exe_times)

## 6.1. Blocked Matrix Multiplication¶

One commonly used strategy is tiling matrices into small blocks that can fit into the cache. The math behind it is that a block of $$C$$, e.g. C[x:x+tx, y:y+ty] in NumPy notation, can be computed from the corresponding rows of $$A$$ and columns of $$B$$. That is

C[x:x+tx, y:y+ty] = np.dot(A[x:x+tx,:], B[:,y:y+ty])

We can further decompose this matrix multiplication into multiple small ones

C[x:x+tx, y:y+ty] = sum(np.dot(A[x:x+tx,k:k+tk], B[k:k+tk,y:y+ty]) for k in range(0,n,tk))

This computation is also illustrated in Fig. 6.1.1.

Fig. 6.1.1 Blocked tiling for matrix multiplication.

In each submatrix computation, we need to write a [tx, ty] shape matrix, and read two matrices with shapes [tx, tk] and [tk, ty]. We can perform such a computation on a single CPU core. If we properly choose the tiling sizes tx, ty, and tk so that these blocks fit into the L1 cache, which is 32 KB for our CPU (refer to Section 1), the reduced cache misses should improve the performance. Let's implement this idea.
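To make the decomposition above concrete before building the TVM schedule, the following plain-Python sketch (not part of the original notebook) spells out the blocked loop nest; the tile sizes are illustrative, and n is assumed to be divisible by them.

```python
# Plain-Python sketch of blocked matrix multiplication: each [tx, ty] block
# of C is accumulated from [tx, tk] slices of A and [tk, ty] slices of B.
# This only illustrates the loop structure; it is not optimized.
def matmul_blocked(A, B, n, tx=4, ty=4, tk=2):
    # Assumes n is divisible by tx, ty, and tk.
    C = [[0.0] * n for _ in range(n)]
    for x in range(0, n, tx):
        for y in range(0, n, ty):
            for k in range(0, n, tk):
                # C[x:x+tx, y:y+ty] += A[x:x+tx, k:k+tk] @ B[k:k+tk, y:y+ty]
                for i in range(x, x + tx):
                    for kk in range(k, k + tk):
                        a = A[i][kk]
                        for j in range(y, y + ty):
                            C[i][j] += a * B[kk][j]
    return C
```

The result is identical to an untiled triple loop; only the traversal order, and hence the cache behavior, changes.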
In the following code block, we choose tx=ty=32 and tk=4, so that the submatrix to write has a size of 32*32*4 = 4 KB and the total size of the two submatrices to read is 2*32*4*4 = 1 KB. The three matrices together can fit into our L1 cache easily. The tiling is implemented by the tile primitive. After tiling, we merge the outer width and height axes into a single one using the fuse primitive, so that we can parallelize it. This means that we will compute blocks in parallel. Within a block, we split the reduced axis, reorder the axes as we did in Section 5, vectorize the innermost axis using SIMD instructions, and unroll the second innermost axis, namely the inner reduction axis, using the unroll primitive.

tx, ty, tk = 32, 32, 4  # tile sizes

def block(n):
    A, B, C = d2ltvm.matmul(n, n, n)
    s = te.create_schedule(C.op)
    # Tile by blocks, and then parallelize the computation of each block
    xo, yo, xi, yi = s[C].tile(*C.op.axis, tx, ty)
    xy = s[C].fuse(xo, yo)
    s[C].parallel(xy)
    # Optimize the computation of each block
    ko, ki = s[C].split(s[C].op.reduce_axis[0], factor=tk)
    s[C].reorder(ko, xi, ki, yi)
    s[C].vectorize(yi)
    s[C].unroll(ki)
    return s, (A, B, C)

s, (A, B, C) = block(64)
print(tvm.lower(s, [A, B, C], simple_mode=True))

produce C {
  parallel (x.outer.y.outer.fused, 0, 4) {
    for (x.inner.init, 0, 32) {
      C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner.init*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] = x32(0f)
    }
    for (k.outer, 0, 16) {
      for (x.inner, 0, 32) {
        C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] = (C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] + (x32(A[(((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (k.outer*4))])*B[ramp(((k.outer*256) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)]))
        C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) +
(floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] = (C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (k.outer*4)) + 1)])*B[ramp((((k.outer*256) + (floormod(x.outer.y.outer.fused, 2)*32)) + 64), 1, 32)]))
        C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] = (C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (k.outer*4)) + 2)])*B[ramp((((k.outer*256) + (floormod(x.outer.y.outer.fused, 2)*32)) + 128), 1, 32)]))
        C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] = (C[ramp((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (floormod(x.outer.y.outer.fused, 2)*32)), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 2)*2048) + (x.inner*64)) + (k.outer*4)) + 3)])*B[ramp((((k.outer*256) + (floormod(x.outer.y.outer.fused, 2)*32)) + 192), 1, 32)]))
      }
    }
  }
}

From the generated C-like code, we can see that parallel is placed on the fused x.outer.y.outer axis, i.e. xy. The vectorization translated the axis yi, whose length is 32, into a ramp with a stride of 1 and a width of 32. Besides, the axis ki is unrolled into 4 sequential operations to reduce the cost of the for-loop.

blocked_gflops = d2ltvm.bench_matmul_tvm(block, sizes, target)
d2ltvm.plot_gflops(sizes, [np_gflops, blocked_gflops], ['numpy', 'block'])

The benchmark results show that our program is as good as NumPy for small matrices, but still doesn't do well for large ones. One major reason is that both the reads and the writes of these submatrices are no longer contiguous after tiling.

## 6.2. Write Cache¶

The non-contiguous write issue is more severe than the non-contiguous read issue.
This is because we read each submatrix of A and B only once, but need to write the submatrix of C n times. In the following code block, we first write the results into a local buffer for each submatrix computation, and then write them back to C. This is done with the cache_write method. We specify the buffer being used for each block by attaching it at the fused xy axis using the compute_at primitive. The rest of the optimization is the same as before, but note that we need to use s[CachedC] instead of s[C] to optimize the submatrix computation.

def cached_block(n):
    A, B, C = d2ltvm.matmul(n, n, n)
    s = te.create_schedule(C.op)
    # Create a write cache for C
    CachedC = s.cache_write(C, 'local')
    # Same as before, first tile by blocks, and then parallelize the
    # computation of each block
    xo, yo, xi, yi = s[C].tile(*C.op.axis, tx, ty)
    xy = s[C].fuse(xo, yo)
    s[C].parallel(xy)
    # Use the write cache for the output of the xy axis, namely a block.
    s[CachedC].compute_at(s[C], xy)
    # Same as before to optimize the computation of a block
    xc, yc = s[CachedC].op.axis
    ko, ki = s[CachedC].split(CachedC.op.reduce_axis[0], factor=tk)
    s[CachedC].reorder(ko, xc, ki, yc)
    s[CachedC].unroll(ki)
    s[CachedC].vectorize(yc)
    return s, (A, B, C)

s, (A, B, C) = cached_block(512)
print(tvm.lower(s, [A, B, C], simple_mode=True))

produce C {
  parallel (x.outer.y.outer.fused, 0, 256) {
    // attr [C.local] storage_scope = "local"
    allocate C.local[float32 * 1024]
    produce C.local {
      for (x.c.init, 0, 32) {
        C.local[ramp((x.c.init*32), 1, 32)] = x32(0f)
      }
      for (k.outer, 0, 128) {
        for (x.c, 0, 32) {
          C.local[ramp((x.c*32), 1, 32)] = (C.local[ramp((x.c*32), 1, 32)] + (x32(A[(((floordiv(x.outer.y.outer.fused, 16)*16384) + (x.c*512)) + (k.outer*4))])*B[ramp(((k.outer*2048) + (floormod(x.outer.y.outer.fused, 16)*32)), 1, 32)]))
          C.local[ramp((x.c*32), 1, 32)] = (C.local[ramp((x.c*32), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 16)*16384) + (x.c*512)) + (k.outer*4)) + 1)])*B[ramp((((k.outer*2048) + (floormod(x.outer.y.outer.fused, 16)*32)) + 512), 1, 32)]))
          C.local[ramp((x.c*32), 1, 32)] = (C.local[ramp((x.c*32), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 16)*16384) + (x.c*512)) + (k.outer*4)) + 2)])*B[ramp((((k.outer*2048) + (floormod(x.outer.y.outer.fused, 16)*32)) + 1024), 1, 32)]))
          C.local[ramp((x.c*32), 1, 32)] = (C.local[ramp((x.c*32), 1, 32)] + (x32(A[((((floordiv(x.outer.y.outer.fused, 16)*16384) + (x.c*512)) + (k.outer*4)) + 3)])*B[ramp((((k.outer*2048) + (floormod(x.outer.y.outer.fused, 16)*32)) + 1536), 1, 32)]))
        }
      }
    }
    for (x.inner, 0, 32) {
      for (y.inner, 0, 32) {
        C[((((floordiv(x.outer.y.outer.fused, 16)*16384) + (x.inner*512)) + (floormod(x.outer.y.outer.fused, 16)*32)) + y.inner)] = C.local[((x.inner*32) + y.inner)]
      }
    }
  }
}

Note from the generated pseudo code that we initialize and accumulate C.local within the fused x.outer.y.outer axis, i.e. once per block, and that the size of C.local is tx * ty = 1024 elements.
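The effect of cache_write can be mimicked in plain Python (again a structural sketch, not the notebook's code): accumulate each block into a small local list, the analogue of C.local, and copy it back to C once per block instead of updating C's strided rows on every reduction step. Tile sizes are illustrative and n is assumed divisible by them.

```python
# Plain-Python analogue of the write-cache schedule: a per-block local
# buffer absorbs all the repeated accumulations, and C receives a single
# contiguous write-back per block row.
def matmul_write_cached(A, B, n, tx=4, ty=4, tk=2):
    # Assumes n is divisible by tx, ty, and tk.
    C = [[0.0] * n for _ in range(n)]
    for x in range(0, n, tx):
        for y in range(0, n, ty):
            local = [[0.0] * ty for _ in range(tx)]  # plays the role of C.local
            for k in range(0, n, tk):
                for i in range(tx):
                    for kk in range(k, k + tk):
                        a = A[x + i][kk]
                        for j in range(ty):
                            local[i][j] += a * B[kk][y + j]
            # Write the finished block back to C once, row by row.
            for i in range(tx):
                C[x + i][y:y + ty] = local[i]
    return C
```

The arithmetic is unchanged; only the write pattern differs, which is exactly what the cache_write/compute_at pair arranges in the TVM version.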
cached_gflops = d2ltvm.bench_matmul_tvm(cached_block, sizes, target)
d2ltvm.plot_gflops(sizes, [np_gflops, blocked_gflops, cached_gflops], ['numpy', 'block', '+cache'])

We can see that the write cache improves the performance of matrix multiplication for large sizes.

## 6.3. Summary¶

• Blocked tiling improves cache efficiency for matrix multiplication.
• Data that is frequently read and written should be placed in an explicit buffer to reduce cache misses.

## 6.4. Exercises¶

1. Try different hyperparameters for tx, ty, and tk.
2. Try different axis orders.
3. Benchmark on larger matrices and observe whether there is still a performance gap relative to NumPy. If so, try to explain the reason.
4. Evaluate the correctness of the computed results.
# Predicting binary target value based on unlabelled features I have a dataset of around 15 stocks having the following data format : TimeStamp | F1 | F2 | F3 | ...Fn | Y Fi : A random feature(numeric) which contains some information about the target Variable Y: A binary 0/1 target variable to predict. The time column is meta data, not a feature. All other columns are features. The dataset is sufficiently large. I need to make a predictive model for predicting binary values using the random unlabelled numeric features( 71 total features) . Can you help me how to proceed with this problem ? You have lots of choices here. You did not say whether the Fi are categorical or numeric. If numeric, logistic regression is a simple and obvious place to start. Logistic regression also works with categorical variables, but best if the number of such variables is not large and the number of categories is small. The logistic regression will let you know if anything is going on at all, but it may not give you the best predictive power. For a small number of categorical variables, I would try a log-linear model. Assuming more variables: If categorical, then I would try a decision tree. Decision trees also work with numerical variables or both -- so it's a good second choice either way, but more suited to categoricals. If you are getting nice results from the tree and the number of variables is large, I would then move to a random forest. These don't add much, in my experience, when the number of variables is small (say 30 or less), but they can greatly improve the stability and predictive power of your model when you have a large number of variables. If the variables are numeric, a support vector machine, or a Gaussian classification model, would improve on your logistic regression. Note. I have a statistics background, rather than a machine learning background, so I tend not to take my explanatory variables at face value. 
It's nice to get the children off the street, so if the number of variables is not super huge, I might try a lasso regression to see if I can drop any of the variables. If a number of your explanatory variables do not actually give much information about the target, they will basically add noise and instability to your model. But I'm old fashioned. A lot of people just throw everything in the hopper and leave it at that. • The features are numeric but they are unlabelled and there are 71 features. How to correlate the prediction(binary value (0/1)) with a function of the features(F1,F2,..,F71) ? Would logistic regression be a good choice in this case ? Sep 5 '18 at 13:49
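If the features are numeric, the logistic-regression baseline suggested above can be sketched in a few lines of NumPy. This is only a sketch: the synthetic data stands in for the real F1..F71 columns, the learning rate and iteration count are arbitrary, and in practice scikit-learn's LogisticRegression would be the convenient route.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real dataset: 500 rows, 71 numeric features,
# binary target generated from a hypothetical "true" coefficient vector.
n, d = 500, 71
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.5 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient ascent on the average log-likelihood.
w = np.zeros(d)
for _ in range(500):
    w += 0.01 * X.T @ (y - sigmoid(X @ w)) / n

train_acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Training accuracy well above chance (0.5) indicates the features carry signal; a held-out split would be needed for an honest estimate.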
# elatex ETEX(1)

## NAME

etex, einitex, evirtex - extended TeX

## SYNOPSIS

etex [options] [commands]

## DESCRIPTION

This manual page is not meant to be exhaustive. The complete documentation for this version of TeX can be found in the info file or manual Web2C: A TeX implementation.

e-TeX is the first concrete result of an international research & development project, the NTS Project, which was established under the aegis of DANTE e.V. during 1992. The aims of the project are to perpetuate and develop the spirit and philosophy of TeX, whilst respecting Knuth’s wish that TeX should remain frozen.

e-TeX can be used in two different modes: in compatibility mode it is supposed to be completely interchangeable with standard TeX. In extended mode several new primitives are added that facilitate (among other things) bidirectional typesetting.

An extended mode format is generated by prefixing the name of the source file for the format with an asterisk (*). Such formats are often prefixed with an ‘e’, hence etex as the extended version of tex and elatex as the extended version of latex. However, eplain is an exception to this rule.

The einitex and evirtex commands are e-TeX’s analogues to the initex and virtex commands. In this installation, they are symlinks to the etex executable. e-TeX’s handling of its command-line arguments is similar to that of TeX.

## OPTIONS

This version of e-TeX understands the following command line options.

--efmt format
Use format as the name of the format to be used, instead of the name by which e-TeX was called or a %& line.

--file-line-error-style
Print error messages in the form file:line:error, which is similar to the way many compilers format them.

--help
Print help message and exit.

--ini
Be einitex, for dumping formats; this is implicitly true if the program is called as einitex.

--interaction mode
Sets the interaction mode. The mode can be one of batchmode, nonstopmode, scrollmode, and errorstopmode.
The meaning of these modes is the same as that of the corresponding \commands.

--ipc
Send DVI output to a socket as well as the usual output file. Whether this option is available is the choice of the installer.

--ipc-start
As --ipc, and starts the server at the other end as well. Whether this option is available is the choice of the installer.

--jobname name
Use name for the job name, instead of deriving it from the name of the input file.

--kpathsea-debug bitmask
Sets path searching debugging flags according to the bitmask. See the Kpathsea manual for details.

--maketex fmt
Enable mktexfmt, where fmt must be one of tex or tfm.

--mltex
Enable MLTeX extensions.

--no-maketex fmt
Disable mktexfmt, where fmt must be one of tex or tfm.

--output-comment string
Use string for the DVI file comment instead of the date.

--parse-first-line
If the first line of the main input file begins with %&, parse it to look for a dump name or a --translate-file option.

--progname name
Pretend to be program name. This affects both the format used and the search paths.

--recorder
Enable the filename recorder. This leaves a trace of the files opened for input and output in a file with extension .fls.

--shell-escape
Enable the \write18{command} construct. The command can be any Bourne shell command. This construct is normally disallowed for security reasons.

--translate-file tcxname
Use the tcxname translation table.

--version
Print version information and exit.

## ENVIRONMENT

See the Kpathsearch library documentation (the ‘Path specifications’ node) for precise details of how the environment variables are used. The kpsewhich utility can be used to query the values of the variables.

One caveat: In most e-TeX formats, you cannot use ~ in a filename you give directly to e-TeX, because ~ is an active character, and hence is expanded, not taken as part of the filename. Other programs, such as Metafont, do not have this problem.

TEXMFOUTPUT
Normally, e-TeX puts its output files in the current directory.
If any output file cannot be opened there, it tries to open it in the directory specified in the environment variable TEXMFOUTPUT. There is no default value for that variable. For example, if you say tex paper and the current directory is not writable, if TEXMFOUTPUT has the value /tmp, e-TeX attempts to create /tmp/paper.log (and /tmp/paper.dvi, if any output is produced).

TEXINPUTS
Search path for \input and \openin files. This should probably start with ‘‘.’’, so that user files are found before system files. An empty path component will be replaced with the paths defined in the texmf.cnf file. For example, set TEXINPUTS to ".:/home/user/tex:" to prepend the current directory and ‘‘/home/user/tex’’ to the standard search path.

TEXFONTS
Search path for font metric (.tfm) files.

TEXFORMATS
Search path for format files.

TEXPOOL
Search path for einitex internal strings.

TEXEDIT
Command template for switching to editor. The default, usually vi, is set when e-TeX is compiled.

## FILES

The location of the files mentioned below varies from system to system. Use the kpsewhich utility to find their locations.

etex.pool
Encoded text of e-TeX’s messages.

texfonts.map
Filename mapping definitions.

*.tfm
Metric files for e-TeX’s fonts.

*.efmt
Predigested e-TeX format (.efmt) files.

## BUGS

This version of e-TeX implements a number of optional extensions. In fact, many of these extensions conflict to a greater or lesser extent with the definition of e-TeX. When such extensions are enabled, the banner printed when e-TeX starts is changed to print e-TeXk instead of e-TeX.

This version of e-TeX fails to trap arithmetic overflow when dimensions are added or subtracted. Cases where this occurs are rare, but when it does the generated DVI file will be invalid.
# ELEN E6892 Course Notes

## Probability Basics

Given a space $\Omega$ containing points partitioned into different regions $A$ and $B$, the probability that a point drawn at random comes from $A$ can be computed by the number of points in $A$ divided by the total number of points in $\Omega$. The probability that a point falls in $A$ given that it falls in $B$ is the number of points that fall in both $A$ and $B$ divided by the total number of points in $B$, or $P(x \in A | x \in B) = P(x \in A, x \in B)/P(x \in B)$. It follows that $P(A, B) = P(A|B)P(B)$ for two events $A$ and $B$ (the product rule). By symmetry, we have $P(B, A) = P(A, B) = P(B | A)P(A)$, which leads to Bayes' rule $$P(A | B) = \frac{P(B | A)P(A)}{P(B)}$$ The term $P(B)$ is essentially a normalizer because $P(A|B)$ is a function of $A$, so we can instead simply write $P(A|B) \propto P(B | A)P(A)$. $P(A|B)$ is the conditional probability - the probability that $A$ will happen given that $B$ has happened. The joint probability $P(A, B)$ is the probability that both $A$ and $B$ will happen. $P(B)$ can either be called the probability of $B$ or the marginal (or prior) probability of $B$, where the latter indicates that there's some other event that we are marginalizing out. In general, the marginal probability is a way to compute the probability that an event will happen by summing over the probabilities that it and every other event will happen, so that $P(X) = \sum_Y P(X, Y)$ (the sum rule).

##### Example

Let $a$ be a Bernoulli random variable which is $1$ when some person has some disease and $0$ when they don't, and let $b$ be a Bernoulli random variable which is $1$ when they test positive for the disease and $0$ when they don't. Let $P(b = 1 | a = 1) = P(b = 0 | a = 0) = 0.95$, and let $P(a = 1) = 0.01$. Using Bayes' rule, we have $P(a = 1|b = 1) = \frac{P(b = 1 | a = 1)P(a = 1)}{P(b = 1)} = 0.16$.
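The arithmetic of this example is easy to check directly (a small sketch; 0.16 is the rounded posterior):

```python
# P(a=1 | b=1) for the disease-testing example, via Bayes' rule.
p_b1_given_a1 = 0.95   # sensitivity, P(b=1 | a=1)
p_b0_given_a0 = 0.95   # specificity, P(b=0 | a=0)
p_a1 = 0.01            # prior probability of disease, P(a=1)

# Marginal P(b=1) by the sum rule over a.
p_b1 = p_b1_given_a1 * p_a1 + (1 - p_b0_given_a0) * (1 - p_a1)
posterior = p_b1_given_a1 * p_a1 / p_b1   # about 0.161
```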
Essentially, the low probability that $a = 1$ corrects the estimate and makes it lower. #### Continuous Random Variables Now, consider the case where the values of a random variable can be continuously-valued. To call a function in some space $\Omega$ a probability density, we require that $p(x) \ge 0\; \forall x \in \Omega$ and that $\int_\Omega p(x) d x = 1$ and $p(A) = \int_A p(x) d x$. Similarly to the discrete random variable case, we have $p(x | y) = p(x, y)/p(y)$ and $p(x | y)p(y) = p(x, y) = p(y | x)p(x)$ and $$p(x | y) = \frac{p(y | x)p(x)}{p(y)} = \frac{p(y | x)p(x)}{\int_{\Omega_x} p(y | x)p(x) dx}$$ #### Expectation, Variance and Covariance The expectation of some function $f(x)$ under a probability distribution $p(x)$ is computed by $\mathbb{E}[f] = \sum_x P(x)f(x)$ for discrete distributions and $\mathbb{E}[f] = \int p(x)f(x) dx$ for probability densities of continuous $x$. It can be estimated empirically by $$\mathbb{E}[f] \approx \frac{1}{N}\sum_{n = 1}^N f(x_n)$$ The variance of $f(x)$ can be computed as $\mathrm{Var}[f] = \mathbb{E}[(f(x) - \mathbb{E}[f(x)])^2]$. For two random variables $x$ and $y$ we can compute the covariance as $\mathrm{cov}[x, y] = \mathbb{E}_{x, y}[(x - \mathbb{E}[x])(y^\top - \mathbb{E}[y^\top])]$. In the special case where $f(x) = x$ we have $\mathbb{E}[f] = \int p(x)x dx$ and \begin{align*} \mathrm{Var}[x] &= \int (x - \mathbb{E}[x])^2 p(x) dx\\ &= \int x^2 p(x) dx - 2\mathbb{E}[x] \int x p(x) dx + \mathbb{E}[x]^2\\ &= \mathbb{E}[x^2] - \mathbb{E}[x]^2 \end{align*} ### Models The general modeling assumption is that the data $y$ is drawn from some distribution parameterized by $x$, and that the model parameters themselves are drawn from some other distribution so that $y \sim p(y | x)$ and $x \sim p(x)$. In this way, we have $p(x | y) \propto p(y | x)p(x)$. 
When constructing our model, we make explicit assumptions about $p(y | x)p(x)$ but do not know $p(x | y)$ (the probability of a parameter value given the data); we seek to be able to make decisions about parameters given our data.

##### Example: Beta-Bernoulli

Let $X$ be a Bernoulli random variable with $P(X = 1) = \pi$ and $P(X = 0) = 1 - \pi$. We can then write that $P(X = x | \pi) = \pi^x (1 - \pi)^{1 - x}$. Now, we sample $X$ a number of times to obtain $X_i \sim_{iid} P(X | \pi)$. We can then compute \begin{align*} P(X_1, \ldots, X_N | \pi) &= \prod_{i = 1}^N P(X_i | \pi)\\ &= \prod_{i = 1}^N \pi^{X_i} (1 - \pi)^{1 - X_i}\\ &= \pi^{\sum X_i} (1 - \pi)^{N - \sum X_i} \end{align*} We can search for the value of $\pi$ which maximizes this quantity by computing $$\pi_{ML} = \arg\max_\pi p(X_{1:N} | \pi) = \frac{1}{N} \sum_{i = 1}^N X_i$$ However, this is only a point estimate; intuitively, we should become more confident in our estimate of $\pi$ as $N$ increases, and $\pi_{ML}$ does not capture that uncertainty. Using Bayes' rule, we can say that $P(\pi | X_{1 : N}) \propto P(X_{1 : N} | \pi)p(\pi)$. This will give a posterior probability distribution on $\pi$. If we don't have any idea of what $\pi$ should be, we can set $p(\pi)$ to be the uniform distribution over $[0, 1]$. Then we have $$P(\pi | X_{1 : N}) \propto \pi^{(\sum X_i + 1) - 1} (1 - \pi)^{(N - \sum X_i + 1) - 1}$$ The Beta distribution is defined as $$\mathrm{Beta}(a, b) = p(\pi) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} \pi^{a - 1}(1 - \pi)^{b - 1}$$ Note that if we set $a = 1, b = 1$ then the Beta distribution is the uniform distribution.
So, more generally, if we have $X_i \sim_{iid} \mathrm{Bernoulli}(\pi), i = 1, \ldots, N$ and we define a prior on $\pi$ as $\pi \sim \mathrm{Beta}(a, b)$ then we have \begin{align*} P(\pi | X_{1:N}) \propto p(X_{1 : N} | \pi)p(\pi) &= \left(\pi^{\sum X_i} (1 - \pi)^{N - \sum X_i}\right) \left(\frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)} \pi^{a - 1} (1 - \pi)^{b - 1}\right)\\ &\propto \pi^{\sum X_i + a - 1} (1 - \pi)^{N - \sum X_i + b - 1}\\ &= \mathrm{Beta}(a + \sum X_i, b + N - \sum X_i) \end{align*} A property of Beta distributions is that $\mathbb{E}[\pi | X_{1:N}] = \frac{a + \sum X_i}{a + b + N}$. So, as $N$ increases, the terms $a$ and $b$ become negligible and the expectation converges on the empirical estimate. The variance is then $$\mathrm{Var}(\pi | X_{1:N}) = \frac{(a + \sum X_i)(b + N - \sum X_i)}{(a + b + N)^2 (a + b + N + 1)}$$ which goes down like some constant divided by $N$. So the variance decreases as $N$ increases. The posterior therefore captures the uncertainty in our estimate of $\pi$.

### Exponential family distributions

Most models of interest can be written in exponential family form, where $p(x | \eta) = h(x) \mathrm{exp}(\eta^\top t(x) - A(\eta))$. Each term has a name; $\eta$ is the natural parameter, $t(x)$ is the sufficient statistic, and $A(\eta)$ is the log-normalizer which ensures the whole thing sums to 1.
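Returning to the Beta-Bernoulli update for a moment, the closed-form posterior is easy to check numerically (a sketch; the true $\pi$, the uniform prior, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
pi_true = 0.3
X = (rng.random(200) < pi_true).astype(int)   # X_i ~iid Bernoulli(0.3)

a, b = 1.0, 1.0                               # Beta(1, 1), i.e. a uniform prior
N, k = len(X), X.sum()
a_post, b_post = a + k, b + N - k             # Beta(a + sum X_i, b + N - sum X_i)

post_mean = a_post / (a_post + b_post)        # (a + sum X_i)/(a + b + N)
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
```

With $N = 200$ the posterior mean should sit near the empirical frequency, and the posterior variance should already be small.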
We can compute the expectation by \begin{align*} \int p(x | \eta)dx &= 1\\ \rightarrow \nabla_\eta \int p(x | \eta)dx &= 0\\ \rightarrow \int (t(x) - \nabla_\eta A(\eta))p(x | \eta)dx &= 0\\ \rightarrow \int t(x)p(x | \eta)dx - \nabla_\eta A(\eta) \int p(x | \eta)dx &= 0\\ \rightarrow \mathbb{E}[t(x)] - \nabla_\eta A(\eta) \int p(x | \eta)dx &= 0\\ \rightarrow \mathbb{E}[t(x)] - \nabla_\eta A(\eta) &= 0\\ \rightarrow \mathbb{E}[t(x)] &= \nabla_\eta A(\eta) \end{align*} where the last simplification was made based on the fact that $\int p(x | \eta)dx = 1$. Similarly, differentiating the identity $\int (t(x) - \nabla_\eta A(\eta))p(x | \eta)dx = 0$ once more, we can compute \begin{align*} \nabla^2_\eta \int p(x | \eta)dx &= 0\\ \rightarrow -\nabla_\eta^2 A(\eta)\int p(x | \eta)dx + \int (t(x) - \nabla_\eta A(\eta))(t(x) - \nabla_\eta A(\eta))^\top p(x | \eta)dx &= 0\\ \rightarrow \mathbb{E}[t(x)t(x)^\top] - \mathbb{E}[t(x)]\nabla_\eta A(\eta)^\top - \nabla_\eta A(\eta)\mathbb{E}[t(x)]^\top + \nabla_\eta A(\eta) \nabla_\eta A(\eta)^\top &= \nabla_\eta^2 A(\eta)\\ \rightarrow \mathbb{E}[t(x)t(x)^\top] - \nabla_\eta A(\eta) \nabla_\eta A(\eta)^\top &= \nabla_\eta^2 A(\eta)\\ \rightarrow \mathbb{E}[t(x)t(x)^\top] - \mathbb{E}[t(x)]\mathbb{E}[t(x)]^\top &= \nabla_\eta^2 A(\eta)\\ \rightarrow \mathrm{Var}[t(x)] &= \nabla_\eta^2 A(\eta) \end{align*} where we made use of the relations $\mathbb{E}[t(x)] = \nabla_\eta A(\eta)$ and $\mathrm{Var}[x] = \mathbb{E}[xx^\top] - \mathbb{E}[x]\mathbb{E}[x]^\top$.

##### Example: Bernoulli

We can write $$p(x | \pi) = \pi^x (1 - \pi)^{1 - x} = \mathrm{exp}\left(x \log\left(\frac{\pi}{1 - \pi}\right) + \log(1 - \pi)\right)$$ where $\eta(\pi) = \log\left(\frac{\pi}{1 - \pi}\right)$, $t(x) = x$ and $A(\eta) = \log(1 + e^\eta)$.
We can verify the expectation and variance by using the relations above and computing $$\mathbb{E}[x] = \frac{\partial A(\eta)}{\partial \eta} = \frac{e^\eta}{1 + e^\eta} = \frac{1}{1 + e^{-\eta}} = \pi$$ and $$\mathrm{Var}[x] = \frac{\partial^2 A(\eta)}{\partial \eta^2} = \left(\frac{e^\eta}{1 + e^\eta}\right)\left(1 - \frac{e^\eta}{1 + e^\eta}\right) = \pi(1 - \pi)$$

#### Conjugate prior

In general, if we have data $X \sim p(X | \theta)$ and $\theta \sim p(\theta)$ then we say that the prior $p(\theta)$ is conjugate when the posterior $p(\theta | x)$ is in the same family as $p(\theta)$. Any exponential family distribution with $p(X | \eta) = h(x) \mathrm{exp}(\eta^\top t(x) - A(\eta))$ has a conjugate prior on $\eta$ given by $$p(\eta | \alpha, \nu) = f(\alpha, \nu) \mathrm{exp}(\eta^\top \alpha - \nu A(\eta))$$ where $f(\alpha, \nu)$ is a normalizing coefficient. Applying Bayes' rule gives \begin{align*} p(\eta | X_{1:N}) &\propto p(X_{1:N} | \eta) p(\eta)\\ &\propto \left( \prod_{i = 1}^N \mathrm{exp}(\eta^\top t(X_i) - A(\eta)) \right) \mathrm{exp}(\eta^\top \alpha - \nu A(\eta))\\ &\propto \mathrm{exp}\left(\eta^\top \left(\alpha + \sum_{i = 1}^N t(X_i)\right) - (\nu + N)A(\eta)\right)\\ &= p(\eta | \alpha^\prime, \nu^\prime) \end{align*} where $$\alpha^\prime = \alpha + \sum_{i = 1}^N t(X_i)$$ and $\nu^\prime = \nu + N$. This gives a closed-form update for the parameters of the model given new data points.

##### Example: Bernoulli

From above, we have $\eta = \log\left(\frac{\pi}{1 - \pi}\right)$ and $A(\eta) = \log(1 + e^\eta)$ and so the conjugate prior is $p(\eta | \alpha, \nu) \propto \mathrm{exp}(\eta \alpha - \nu\log(1 + e^\eta))$. We then get that $$p(\pi | \alpha, \nu) \propto \mathrm{exp}\left(\alpha \log \left(\frac{\pi}{1 - \pi}\right) - \nu\log\left(\frac{1}{1 - \pi}\right)\right)$$ which is $\mathrm{Beta}(\alpha + 1, \nu - \alpha + 1)$.
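The two identities $\mathbb{E}[t(x)] = A'(\eta)$ and $\mathrm{Var}[t(x)] = A''(\eta)$ can be verified numerically for the Bernoulli case using finite differences (a sketch; $\pi = 0.3$ and the step size are arbitrary):

```python
import numpy as np

pi = 0.3
eta = np.log(pi / (1 - pi))   # natural parameter for the Bernoulli

def A(e):
    """Log-normalizer of the Bernoulli in exponential-family form."""
    return np.log1p(np.exp(e))

h = 1e-5
dA = (A(eta + h) - A(eta - h)) / (2 * h)              # approximates A'(eta)
d2A = (A(eta + h) - 2 * A(eta) + A(eta - h)) / h**2   # approximates A''(eta)
# Expect dA close to pi and d2A close to pi * (1 - pi).
```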
### Distributions of sums of random variables

If $X \sim p(X)$ and $Y \sim p(Y)$ are independent, what can we say about $Z = X + Y$? Define the characteristic function of a random variable as $$\mathbb{E}[e^{i t X}] = \int e^{i t X}p(x) d x$$ The characteristic function of $Z$ is then given by \begin{align*} \mathbb{E}[e^{itZ}] &= \mathbb{E}[e^{it(X + Y)}]\\ &= \mathbb{E}[e^{itX}e^{itY}]\\ &= \int \int p(X, Y)e^{itX} e^{itY} dx dy\\ &= \int \int p(X)p(Y)e^{itX} e^{itY} dx dy\\ &= \left(\int p(X) e^{itX} d x\right) \left(\int p(Y) e^{itY} dy\right)\\ &= \mathbb{E}[e^{itX}] \mathbb{E}[e^{itY}] \end{align*} So the characteristic function of $Z$ is just the product of the characteristic functions for $X$ and $Y$ (when they are independent).

##### Example: Poisson-Multinomial

Say we have $X_n \sim \mathrm{Poisson}(\lambda_n), n = 1, \ldots, N$ where $\mathrm{Poisson}(X_n | \lambda_n) = \frac{\lambda_n^{X_n} e^{-\lambda_n}}{X_n!}$ and $X_n \in \{0, 1, 2, \ldots \}$. Now, let $Y = \sum_n X_n$. What can we say about $p(X_1, \ldots, X_N | Y)$? Using Bayes' rule, we have \begin{align*} p(X_1, \ldots, X_N | Y) &= \frac{p(Y | X_{1 : N}) p(X_1, \ldots, X_N)}{p(Y)}\\ &= \frac{p(X_1, \ldots, X_N)}{p(Y)} \end{align*} since $p(Y | X_{1 : N}) = 1$: knowing $X_{1 : N}$ gives you all the information you need to determine the value of $Y$.
By independence, we can say that $$\frac{p(X_1, \ldots, X_N)}{p(Y)} = \frac{\prod_{n = 1}^N p(X_n)}{p(Y)}$$ So, if $X \sim \mathrm{Poisson}(\lambda)$ then its characteristic function is \begin{align*} \mathbb{E}[e^{itX}] &= \sum_{x = 0}^\infty e^{itx} \frac{ \lambda^x e^{-\lambda}}{x!}\\ &= e^{-\lambda} \sum_{x = 0}^\infty \frac{(\lambda e^{it})^x}{x!}\\ &= e^{-\lambda}e^{\lambda e^{it}}\\ &= e^{-\lambda(1 - e^{it})} \end{align*} Now, we have $Y = \sum_n X_n$ so \begin{align*} \mathbb{E}[e^{itY}] &= \prod_{n = 1}^N \mathbb{E}[e^{itX_n}]\\ &= e^{-\left(\sum_n \lambda_n\right)(1 - e^{it})} \end{align*} Because of the uniqueness of the characteristic function, $Y \sim \mathrm{Poisson}\left(\sum_n \lambda_n\right)$. With some manipulation, we can see that \begin{align*} P(X_1, \ldots, X_N | Y) &= \frac{\prod_{n = 1}^N P(X_n)}{P(Y)}\\ &= \frac{\mathrm{exp}\left(-\sum_n \lambda_n\right) \prod_{n = 1}^N \frac{\lambda_n^{X_n}}{X_n!}}{\mathrm{exp}\left(-\sum_n \lambda_n\right) \frac{\left(\sum_n\lambda_n\right)^Y}{Y!}}\\ &= \frac{Y!}{X_1! \cdots X_N!} \prod_{n = 1}^N \left(\frac{\lambda_n}{\sum_m \lambda_m} \right)^{X_n}\\ \end{align*} which is a multinomial distribution. Probabilistically speaking, drawing the $X_n$ independently is the same as first drawing $Y$ and then drawing from this multinomial distribution.

## Regression

Given data $D = \{(x_i, y_i)\}_{i = 1}^N$ where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$, the goal of regression is to model the relationship between the “input” $x$ and “output” $y$ so that given a new $x$ we can predict the corresponding response $y$.

### Linear Regression

For linear regression, we assume that the output is linearly related to the input, so that $$y_i \approx f(x_i) = \omega_0 + \sum_{k = 1}^d \omega_k x_{i, k}$$ This reduces our goal to finding $\omega = [\omega_0, \ldots \omega_d]$. For convenience, we construct the matrix $X = [1, x_1^\top; 1, x_2^\top; \ldots 1, x_N^\top]$ so that $f(X) = X\omega$.
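Returning to the Poisson example for a moment: both closed-form results can be sanity-checked by simulation (a sketch; the rates, sample size, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lams = np.array([1.0, 2.5, 0.5])   # lambda_1, lambda_2, lambda_3

# Draw X_n ~ Poisson(lambda_n) independently, many times, and form Y.
X = rng.poisson(lams, size=(200_000, 3))
Y = X.sum(axis=1)

# If Y ~ Poisson(sum lams) then its mean and variance both equal 4.0.
mean_Y, var_Y = Y.mean(), Y.var()

# Conditional on Y, the split is multinomial with probabilities
# lambda_n / sum(lams); the long-run fractions should match.
frac = X.sum(axis=0) / Y.sum()
```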
#### Least Squares Solution

We can find a solution for $\omega$ which minimizes the squared Euclidean distance between $X\omega$ and $y$, giving $$\omega_{LS} = \mathrm{arg}\min_\omega \|y - X\omega\|^2 = \mathrm{arg}\min_\omega (y - X\omega)^\top(y - X\omega)$$ Differentiating and setting equal to zero yields the solution $\omega_{LS} = (X^\top X)^{-1}X^\top y$. From a probabilistic perspective, using least squares implicitly assumes that the error is Gaussian noise, so that $y_i \sim \mathcal{N}(x_i^\top \omega, \sigma^2)$ or in vector form $y \sim \mathcal{N}(X\omega, \sigma^2\mathbb{I})$. This gives the likelihood $$p(y | X, \omega, \sigma^2) = (2\pi \sigma^2)^{-\frac{N}{2}} \mathrm{exp}\left(-\frac{1}{2\sigma^2}\|y - X\omega\|^2\right)$$ Differentiating the log-likelihood and setting equal to zero gives the maximum likelihood solution $\omega_{ML}$, and as expected $\omega_{ML} = \omega_{LS}$. We can now compute the expected value of $\omega_{ML}$ by $\mathbb{E}[\omega_{ML}] = (X^\top X)^{-1}X^\top \mathbb{E}[y] = (X^\top X)^{-1}X^\top X\omega = \omega$ and the covariance \begin{align*} \mathrm{Var}[\omega_{ML}] &= \mathbb{E}[(\omega_{ML} - \mathbb{E}[\omega_{ML}])(\omega_{ML} - \mathbb{E}[\omega_{ML}])^\top]\\ &= \mathbb{E}[(X^\top X)^{-1} X^\top y - (X^\top X)^{-1}X^\top X\omega)((X^\top X)^{-1} X^\top y - (X^\top X)^{-1}X^\top X\omega)^\top]\\ &= (X^\top X)^{-1} X^\top \mathbb{E}[(y - X\omega)(y - X\omega)^\top]X(X^\top X)^{-1}\\ &= (X^\top X)^{-1} X^\top \sigma^2 \mathbb{I} X(X^\top X)^{-1}\\ &= \sigma^2 (X^\top X)^{-1} X^\top X(X^\top X)^{-1}\\ &= \sigma^2 (X^\top X)^{-1} \end{align*} The matrix $X$ can be expressed by its singular value decomposition $X = V\Lambda^{1/2}Q^\top$ where $V$ and $Q^\top$ are the left and right orthonormal singular vector matrices and $\Lambda^{1/2}$ is the diagonal matrix of singular values. When the dimensions of $X$ are highly correlated, some of its singular values (entries of $\Lambda^{1/2}$) will be very small.
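As an aside, the closed-form least-squares solution can be checked against a library solver (a sketch on synthetic data; the coefficients and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
x = rng.standard_normal((n, d))
X = np.hstack([np.ones((n, 1)), x])            # prepend the bias column of 1s
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.standard_normal(n)

# omega_LS = (X^T X)^{-1} X^T y, computed with solve() rather than an
# explicit matrix inverse for numerical stability.
w_ls = np.linalg.solve(X.T @ X, X.T @ y)
w_np = np.linalg.lstsq(X, y, rcond=None)[0]    # library answer for comparison
```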
We can express \begin{align*} \mathrm{Var}[\omega_{ML}] &= \sigma^2 ((V\Lambda^{1/2}Q^\top)^\top V\Lambda^{1/2}Q^\top)^{-1}\\ &= \sigma^2 (Q\Lambda^{1/2}V^\top V\Lambda^{1/2}Q^\top)^{-1}\\ &= \sigma^2 (Q\Lambda Q^\top)^{-1}\\ &= \sigma^2 Q\Lambda^{-1} Q^\top \end{align*} Note that when $X$ has highly correlated dimensions, it will have singular values $\Lambda$ which decay rapidly when sorted by magnitude, and so the inversion $\Lambda^{-1}$ will cause $\mathrm{Var}[\omega_{ML}]$ to have some very large values.

#### Ridge Regression/MAP

One solution to the highly-correlated $X$ situation is to add an $L^2$ penalty on $\omega$, giving the ridge regression solution $$\omega_{RR} = \mathrm{arg}\min_{\omega} \frac{1}{2\sigma^2} \|y - X\omega\|^2 + \frac{\lambda}{2} \|\omega\|^2$$ As before, we can differentiate and set equal to zero to find the solution $$\omega_{RR} = (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1} X^\top y$$ This is the same as the maximum likelihood solution except that we have added the term $\lambda \sigma^2 \mathbb{I}$ which increases the eigenvalues of $X^\top X$ when inverting. This is equivalent to putting a prior on $\omega$ such that $\omega \sim \mathcal{N}(0, \lambda^{-1}\mathbb{I})$. So, we end up finding \begin{align*} \omega_{MAP} &= \mathrm{arg}\max_\omega \log(p(\omega | D))\\ &= \mathrm{arg}\max_\omega \log(p(D|\omega)p(\omega)) + \mathrm{const}\\ &= \mathrm{arg}\max_\omega -\frac{1}{2\sigma^2} \|y - X\omega\|^2 - \frac{\lambda}{2} \|\omega\|^2 + \mathrm{const}\\ &= \omega_{RR} \end{align*} So finding the maximum a posteriori estimate with a Gaussian prior on $\omega$ is the same as performing ridge regression.
As before, we can compute the expected value of the solution by $$\mathbb{E}[\omega_{MAP}] = (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1} X^\top \mathbb{E}[y] = (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1} X^\top X \omega$$ so that we essentially have a biased estimate of the maximum likelihood solution. Similarly, we can compute the variance, giving \begin{align*} \mathrm{Var}(\omega_{MAP}) &= \mathbb{E}[(\omega_{MAP} - \mathbb{E}[\omega_{MAP}])(\omega_{MAP} - \mathbb{E}[\omega_{MAP}])^\top]\\ &= (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1} X^\top \mathbb{E}[(y - X\omega)(y - X\omega)^\top]X(\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}\\ &= \sigma^2 (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1} X^\top X (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}\\ &= \sigma^2 (\lambda \sigma^2 \mathbb{I} + Q\Lambda Q^\top)^{-1} X^\top X (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}\\ &= \sigma^2 (Q(\lambda \sigma^2\mathbb{I})Q^\top + Q\Lambda Q^\top)^{-1} X^\top X (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}\\ &= \sigma^2 (Q(\lambda \sigma^2 \mathbb{I} + \Lambda)Q^\top)^{-1} X^\top X (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}\\ &= \sigma^2 Q\tilde{\Lambda}^{-1}Q^\top \end{align*} where we have defined $\tilde{\Lambda}$ such that $$\tilde{\Lambda}^{-1}_{ii} = \frac{\Lambda_{ii}}{(\lambda \sigma^2 + \Lambda_{ii})^2}$$ If we have that $y \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$ and $\omega \sim \mathcal{N}(0, \lambda^{-1} \mathbb{I})$ then $p(\omega | D) \propto p(y | X, \omega)p(\omega)$. If we have a Gaussian prior on the mean of a Gaussian, then the posterior is still Gaussian, so we can compute the posterior as $p(\omega | D) = \mathcal{N}(\omega | \mu, \Sigma)$ where $\mu = (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}X^\top y$ and $\Sigma = (\lambda \mathbb{I} + \frac{1}{\sigma^2} X^\top X)^{-1}$. This can be “read off” directly from our knowledge about the form of the posterior of a Gaussian with mean with a Gaussian prior.
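A small numerical sketch of the ridge estimator and its shrinkage behavior (the correlated column, $\lambda$, and $\sigma$ are all arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 4
X = rng.standard_normal((n, d))
X[:, 3] = X[:, 2] + 0.01 * rng.standard_normal(n)   # two highly correlated columns
y = X @ np.array([1.0, -1.0, 2.0, 0.0]) + 0.1 * rng.standard_normal(n)

sigma2 = 0.1 ** 2

def ridge(lam):
    # omega_RR = (lam * sigma^2 * I + X^T X)^{-1} X^T y
    return np.linalg.solve(lam * sigma2 * np.eye(d) + X.T @ X, X.T @ y)

w_ols = ridge(0.0)       # lam = 0 recovers the ML/least-squares solution
w_reg = ridge(1000.0)    # a large penalty shrinks the solution toward zero
```

The norm of the penalized solution never exceeds that of the unpenalized one, and the shrinkage hits the near-collinear direction (the small singular value) hardest, which is exactly the instability the text identifies.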
When computing the approximation $\hat{y}_{new}$ from some $x_{new}$ which was not in $D$ using maximum likelihood, we compute $\hat{y}_{new} = x^\top_{new} \omega_{ML}$. On the other hand, with MAP or ridge regression, we have $\hat{y}_{new} = x^\top_{new} \omega_{MAP}$. From a Bayesian standpoint, we have $$p(y_{new} | x_{new}, X, y) = \int p(y_{new} | x_{new}, \omega)p(\omega | X, y)d \omega = \mathcal{N}(y_{new} | \mu^\top x_{new}, \sigma^2 + x_{new}^\top \Sigma x_{new})$$ by marginalizing according to our assumptions on the model. This gives us a measure of uncertainty of our estimate of $y_{new}$. In this way, if we are presented with new data, we can decide whether to measure $y_{new}$ directly depending on its amount of uncertainty; if we do measure $y_{new}$ we can use it to update the parameters of our model. However, being confident in the model doesn't necessarily mean it's a good model.

#### Nonlinear regression

If we don't expect $y$ to be a linear function of $x$, we can still have a model which is linear in $\omega$ but can express this relationship by instead passing the data through some nonlinearity $\phi$ to compute $X = [1, \phi(x_1)^\top; 1, \phi(x_2)^\top; \ldots 1, \phi(x_N)^\top]$. We then have exactly the same learning problem, where we are trying to learn linear weights - the data matrix $X$ has just been modified.

## Classification

As with regression, we are given data $\{(x_i, y_i)\}_{i = 1}^N$ with measurements $x_i \in \mathbb{R}^d$, but instead we have $y_i \in \{0, 1\}$ which are discrete class labels. Given some $x_{new}$, we'd like to predict $y_{new}$.
We can view this as a regression problem where we have latent variables (ones we don't see) $z \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$ and $\omega \sim \mathcal{N}(0, \lambda^{-1}\mathbb{I})$ where $z_i \ge 0 \rightarrow y_i = 1$ and $z_i < 0 \rightarrow y_i = 0$.

### Bayes classifier

We assume that $y_i \sim_{iid} \mathrm{Bern}(\pi)$ and that $x_i | y_i \sim \mathcal{N}(\mu_{y_i}, \sigma^2)$. So, we have $p(y = 0) = 1 - \pi$ and $p(y = 1) = \pi$. The resulting label $y$ selects which of the two Gaussians we draw from to construct $x_i$. To be Bayesian, we use a prior $\pi \sim \mathrm{Beta}(a, b)$ and $\mu_i \sim \mathcal{N}(0, \nu)$, which gives the posterior \begin{align*} P(\pi, \mu_0, \mu_1 | D) &\propto p(D | \pi, \mu_0, \mu_1) p(\pi, \mu_0, \mu_1)\\ &\propto p(X_{1:N} | y_{1:N}, \pi, \mu_0, \mu_1) p(y_{1:N} | \pi, \mu_0, \mu_1) p(\pi, \mu_0, \mu_1)\\ &\propto \left( \prod_{i = 1}^N p(x_i | \mu_{y_i}) \right)\left(\prod_{i = 1}^N p(y_i | \pi) \right) p(\pi) p(\mu_1) p(\mu_0)\\ &\propto \left(p(\mu_0) \prod_{i \in A_0} p(x_i | \mu_0) \right)\left(p(\mu_1) \prod_{i \in A_1} p(x_i | \mu_1) \right) \left(p(\pi)\prod_{i = 1}^N p(y_i | \pi)\right) \end{align*} where we made the substitution $A_\ell = \{i : y_i = \ell\}$. In this case, each of these terms is independent, so we can normalize separately to find posteriors. The posterior $p(\mu_\ell | x_{i \in A_\ell})$ is a normal distribution, and $p(\pi | y_{1:N})$ is a Beta distribution. Now, given a new $x_{N + 1}$, we'd like to predict $y_{N + 1}$.
We can compute \begin{align*} p(y_{N + 1} = 1 | x_{N + 1}, D) &\propto p(x_{N + 1} | x_{i \in A_1}) p(y_{N + 1} = 1 | y_{1 : N})\\ &\propto \int_{-\infty}^\infty p(x_{N + 1} | \mu_1) p(\mu_1 | x_{i \in A_1}) d\mu_1 p(y_{N + 1} = 1 | y_{1 : N})\\ &\propto \int_{-\infty}^\infty p(x_{N + 1} | \mu_1) p(\mu_1 | x_{i \in A_1}) d\mu_1 \int_0^1 p(y_{N + 1} = 1 | \pi) p(\pi | y_{1:N})d\pi \end{align*} Given these expressions, we can compute the marginal distributions analytically for $y_{N + 1} = 0$ and $y_{N + 1} = 1$ and normalize to get a posterior probability.

### Logistic Regression

As with regression, we say $x \in \mathbb{R}^{d + 1}$ where the first dimension is set equal to $1$ for the bias term. We'd like to learn a vector of coefficients $\omega \in \mathbb{R}^{d + 1}$ to do classification. In the Bayes classifier, we model the labels and the data jointly, so it is a generative model. Regression models are discriminative because they do not attempt to model the data. For logistic regression, we say that $y \sim \mathrm{Bern}(\sigma(x^\top \omega))$ where $\sigma(x^\top \omega) = \frac{e^{x^\top \omega}}{1 + e^{x^\top \omega}}$. With this model, $p(y = 1 | X, \omega) = \sigma(x^\top \omega)$. The logistic function maps values less than 0 towards 0 and values greater than 0 towards 1. Computing $x^\top \omega$ will produce a negative number on one side of the vector normal to $\omega$ and a positive value on the other side. Now $$P(y_1, \ldots, y_N | x_1, \ldots, x_N, \omega) = \prod_{i = 1}^N \sigma(x_i^\top \omega)^{y_i} (1 - \sigma(x_i^\top \omega))^{1 - y_i}$$ We can define a prior on the coefficient vector so that $\omega \sim \mathcal{N}(0, \lambda^{-1} \mathbb{I})$.
The posterior is given by $$p(\omega | D) = \frac{\left(\prod_{i = 1}^N \sigma_i^{y_i} (1 - \sigma_i)^{1 - y_i} \right) \mathrm{exp}(-\frac{\lambda}{2} \omega^\top \omega)}{\int \left(\prod_{i = 1}^N \sigma_i^{y_i} (1 - \sigma_i)^{1 - y_i} \right) \mathrm{exp}(-\frac{\lambda}{2} \omega^\top \omega) d\omega}$$ Note that we can't compute the normalizing denominator, so we need a different way of computing the posterior.

#### MAP inference

The MAP solution is given by \begin{align*} \omega_{MAP} &= \mathrm{arg}\max_\omega \log(p(\omega | D))\\ &= \mathrm{arg}\max_\omega \log(p(D | \omega)) + \log(p(\omega)) + \mathrm{const}\\ &= \mathrm{arg}\max_\omega \left(\sum_{i = 1}^N y_i \omega^\top x_i - \log(1 + e^{\omega^\top x_i}) \right) - \frac{\lambda}{2} \omega^\top \omega + \mathrm{const} \end{align*} We can iteratively find this maximum using Newton's method: First, initialize $\omega = \vec{0}$. Then, for each step $t$ set $\omega_t = \omega_{t - 1} - (\nabla_\omega^2 L)^{-1} \nabla_\omega L$ where $L$ is the objective $$L = \sum_{i = 1}^N \left(y_i \omega^\top x_i - \log( 1 + \mathrm{exp}(\omega^\top x_i))\right) - \frac{\lambda}{2} \omega^\top \omega$$ and iteratively continue this process until $\omega$ doesn't change. From above, we can compute $$\nabla_\omega L = \sum_{i = 1}^N (y_i - \sigma_i) x_i - \lambda \omega_{t - 1}$$ and $$\nabla^2_\omega L = -\left(\sum_{i = 1}^N \sigma_i(1 - \sigma_i) x_i x_i^\top\right) - \lambda\mathbb{I}$$

#### Laplace Approximation

If we actually want to approximate $p(\omega | D)$, we can use the Laplace approximation, a completely general method for models with non-conjugate priors.
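Before moving on, the Newton iteration just described can be written out explicitly (a sketch; the synthetic data, $\lambda = 1$, and 20 iterations are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.standard_normal((n, d))
# Noisy labels from a hypothetical "true" coefficient vector.
y = (X @ np.array([1.5, -2.0, 0.5]) + 0.5 * rng.standard_normal(n) > 0).astype(float)
lam = 1.0

w = np.zeros(d)
for _ in range(20):
    s = 1.0 / (1.0 + np.exp(-X @ w))                     # sigma_i
    grad = X.T @ (y - s) - lam * w                        # gradient of L
    hess = -(X.T * (s * (1 - s))) @ X - lam * np.eye(d)   # Hessian of L
    w = w - np.linalg.solve(hess, grad)                   # Newton step

train_acc = np.mean((X @ w > 0) == (y == 1))
```

At convergence the gradient vanishes, and the negative Hessian at $\omega_{MAP}$ is exactly the matrix inverted in the Laplace covariance discussed next.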
The posterior distribution is $$p(\omega | D) \propto \mathrm{exp}(\log(p(D | \omega) p(\omega))) = \mathrm{exp}(\log(p(D, \omega)))$$ The Laplace approximation replaces the log joint likelihood $\log(p(D, \omega))$ with its second-order Taylor expansion about the MAP, which yields a Gaussian $q(\omega | D) = \mathcal{N}(\omega | \omega_{MAP}, \Sigma)$: \begin{align*} q(\omega | D) &\propto \mathrm{exp}\left(\log(p(D, \omega_{MAP})) + (\omega - \omega_{MAP})^\top \nabla_\omega \log(p(D, \omega))|_{\omega_{MAP}} + \frac{1}{2} (\omega - \omega_{MAP})^\top \nabla_\omega^2 \log(p(D, \omega))|_{\omega_{MAP}} (\omega - \omega_{MAP})\right)\\ &\propto \mathrm{exp}\left(-\frac{1}{2} (\omega - \omega_{MAP})^\top (-\nabla^2_\omega \log(p(D, \omega))|_{\omega_{MAP}}) (\omega - \omega_{MAP})\right)\\ &= \mathcal{N}(\omega | \mu, \Sigma) \end{align*} because the first term is constant and the gradient at the maximum $\omega_{MAP}$ is equal to $0$, and where $\mu = \omega_{MAP}$ and $$\Sigma = (-\nabla_\omega^2 \log(p(D, \omega))|_{\omega_{MAP}})^{-1}$$ In the specific case of the logistic regression problem, we have $$\Sigma = \left(\lambda \mathbb{I} + \sum_{i = 1}^N \sigma_i(1 - \sigma_i)x_i x_i^\top \right)^{-1}$$

#### Multiclass Logistic Regression

In the multiclass setting, we instead have $y \in \{1, \ldots, K\}$.
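A minimal sketch of the logistic-regression Laplace covariance above (the data and $\lambda$ are hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_covariance(X, w_map, lam=1.0):
    """Covariance of the Gaussian Laplace approximation q(omega | D) centered at w_map.

    This is the inverse of the negative Hessian of the log joint at the MAP.
    """
    s = sigmoid(X @ w_map)
    neg_hessian = lam * np.eye(X.shape[1]) + (X.T * (s * (1 - s))) @ X
    return np.linalg.inv(neg_hessian)
```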
Now, each class $k$ has a corresponding vector $\omega_k$, and instead of a sigmoid function we use the softmax function $$\sigma(x^\top \omega_k) = \frac{e^{x^\top \omega_k}}{\sum_j e^{x^\top \omega_j}}$$ So, the probability of the labels given the data and coefficients is \begin{align*} P(y_1, \ldots y_N | x_1, \ldots x_N, \omega_1 \ldots \omega_K) &= \prod_{i = 1}^N p(y_i | x_i, \omega_{1:K})\\ &= \prod_{i = 1}^N\left(\prod_{k = 1}^K \left(\frac{e^{x_i^\top \omega_k}}{\sum_j e^{x_i^\top \omega_j}} \right)^{\mathbf{1}(y_i = k)} \right)\\ &= \prod_{i = 1}^N \mathrm{Mult}(y_i | \sigma_i) \end{align*} where $\mathbf{1}(y_i = k)$ is the indicator function, and we denote $$\sigma_i(k) = \left(\frac{e^{x_i^\top \omega_k}}{\sum_j e^{x_i^\top \omega_j}} \right)$$ Our prior is then $$p(\omega_k) = \mathcal{N}(0, \lambda^{-1}\mathbb{I}) = \frac{\lambda^{d/2}}{(2\pi)^{d/2}} \mathrm{exp}\left(\frac{-\lambda}{2} \omega_k^\top \omega_k \right)$$ which gives the log-posterior \begin{align*} \log(p(\omega_1, \ldots, \omega_K | D)) &= \log\left(\prod_{i = 1}^N p(y_i|\omega_{1:K}, x_i)\right) + \log\left(\prod_{k = 1}^K p(\omega_k) \right) + \mathrm{const}\\ &= \left(\sum_{i = 1}^N \sum_{k = 1}^K \mathbf{1}(y_i = k) \omega_k^\top x_i\right) - \sum_{i = 1}^N \log\left(\sum_{j = 1}^K e^{\omega_j^\top x_i}\right) - \sum_{k= 1}^K \frac{\lambda}{2} \omega_k^\top \omega_k + \mathrm{const} \end{align*} From this, we can find appropriate $\omega_{1:K}$ with gradient ascent.

## Hierarchical Models

In linear classification and regression, we have a single object $\omega$ on which we want a posterior distribution. In hierarchical models, there are many objects we want posterior distributions for. This generally means that there are many variables interacting (so the model is more complex than a simple prior/likelihood/posterior chain). This often means that certain parameters are more than one “step” away from the data, so that priors may have their own priors.
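The softmax probabilities $\sigma_i(k)$ and the log-posterior can be sketched as follows (the row-max subtraction is a standard numerical-stability trick, not part of the derivation; the data in the test are arbitrary):

```python
import numpy as np

def softmax_probs(X, W):
    """Class probabilities sigma_i(k) for multiclass logistic regression.

    X: (N, d) data, W: (d, K) stacked coefficient vectors omega_1..omega_K.
    Returns an (N, K) matrix whose rows sum to 1.
    """
    Z = X @ W
    Z = Z - Z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def log_posterior(X, y, W, lam=1.0):
    """log p(omega_{1:K} | D) up to an additive constant, matching the expression above.

    y: (N,) integer labels in {0, ..., K-1}.
    """
    P = softmax_probs(X, W)
    N = X.shape[0]
    log_lik = np.log(P[np.arange(N), y]).sum()
    return log_lik - 0.5 * lam * np.sum(W * W)
```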
### Matrix Completion

Say we are given a matrix $M \in \mathbb{R}^{N_1 \times N_2}$ with many values “missing”; that is, we only know the true value for about $5\%$ of its entries. This situation arises in collaborative filtering settings, where the rows correspond to users, the columns correspond to products, and the entries are ratings. Each user has probably only rated a few things, and we want to be able to predict their rating for some product they have not rated yet based on other users' ratings. We believe that this matrix is low-rank, and model it as $M \approx uv + \mathrm{noise}$ where $u \in \mathbb{R}^{N_1 \times d}$ with rows $u_i \in \mathbb{R}^d$ and $v \in \mathbb{R}^{d \times N_2}$ with columns $v_j \in \mathbb{R}^d$.

#### Probabilistic Matrix Factorization

We can attack this problem from a Bayesian perspective. Let $\Omega = \{(i, j) : M_{i, j} \mathrm{\ observed}\}$ be the set of indices of observed entries of $M$. In the probabilistic matrix factorization formulation, we say that $M_{i, j} \sim_{i.i.d.} \mathcal{N}(u_i^\top v_j, \sigma^2)$ for $(i, j) \in \Omega$. We use priors $u_i \sim_{i.i.d.} \mathcal{N}(0, \lambda^{-1}\mathbb{I})$ and $v_j \sim_{i.i.d.} \mathcal{N}(0, \lambda^{-1}\mathbb{I})$. In this formulation, there is no orthonormality constraint on the factorization. We also must pick $d$. Once the model is fit, we can predict a missing value using the learned $u_i$ and $v_j$. In contrast to regression/classification, we have two separate variables $u$ and $v$ that we need to learn.
Given the data, we need to learn our posterior, which we can write down as \begin{align*} P(u_{1:N_1}, v_{1:N_2} | D) &= \frac{p(D | u_1, \ldots, u_{N_1}, v_1, \ldots, v_{N_2})p(u_1, \ldots, u_{N_1}, v_1, \ldots, v_{N_2})}{p(D)}\\ &= \frac{\left(\prod_{(i, j) \in \Omega} p(M_{ij} | u_i, v_j)\right)\left(\prod_{i = 1}^{N_1} p(u_i) \right)\left(\prod_{j = 1}^{N_2} p(v_j)\right)}{p(D)} \end{align*} where we were able to separate out the products because our model assumes that the observed entries of $M$ are independent given $u$ and $v$, that $M_{ij}$ is conditionally independent of everything except $u_i$ and $v_j$, and that the priors on the $u_i$ and $v_j$ are independent. Now, the normalizing constant $p(D)$ is $$p(D) = \int_{u_1} \cdots \int_{u_{N_1}} \int_{v_1} \cdots \int_{v_{N_2}} \left(\prod_{(i, j) \in \Omega} p(M_{ij} | u_i, v_j)\right)\left(\prod_{i = 1}^{N_1} p(u_i) \right)\left(\prod_{j = 1}^{N_2} p(v_j)\right) dv_{N_2} \cdots dv_1 du_{N_1} \cdots du_1$$ The model is conditionally conjugate, but the variables become coupled in the likelihood, so this integral is not tractable to compute. However, we can use MAP inference by finding \begin{align*} u^{MAP}_{1:N_1}, v^{MAP}_{1:N_2} &= \mathrm{arg}\max_{u_{1:N_1}, v_{1:N_2}} \sum_{(i, j) \in \Omega} \log\left(p(M_{ij} | u_i, v_j)\right) + \sum_{i = 1}^{N_1} \log\left(p(u_i)\right) + \sum_{j = 1}^{N_2} \log\left(p(v_j)\right)\\ &= \mathrm{arg}\max_{u_{1:N_1}, v_{1:N_2}} \sum_{(i, j) \in \Omega} -\frac{1}{2\sigma^2} \left(M_{ij} - u_i^\top v_j\right)^2 + \sum_{i = 1}^{N_1} -\frac{\lambda}{2} u_i^\top u_i + \sum_{j = 1}^{N_2} -\frac{\lambda}{2} v_j^\top v_j + \mathrm{const}\\ &= \mathrm{arg}\max_{u_{1:N_1}, v_{1:N_2}} \mathcal{L} \end{align*} where in the last step we simply defined $\mathcal{L}$ for this particular problem.
#### Solving using Coordinate Ascent

We can solve an optimization problem like the one above with coordinate ascent, a simple algorithm which optimizes over each free variable individually until convergence, as follows: First, initialize $u_{1:{N_1}}, v_{1:{N_2}}$ (e.g. randomly); then, until convergence, optimize $\mathcal{L}$ separately with respect to each $u_i$ and $v_j$, holding everything else fixed. The optimization step for one of the free variables (here, e.g., $u_i$) can be derived by differentiating and setting equal to zero: \begin{align*} \nabla_{u_i} \mathcal{L} &= \sum_{j : (i, j) \in \Omega} \frac{1}{\sigma^2} (M_{ij} - u_i^\top v_j)v_j - \lambda u_i\\ &= \frac{1}{\sigma^2} \sum_{j : (i, j) \in \Omega} M_{ij} v_j - \frac{1}{\sigma^2} \left(\sum_{j : (i, j) \in \Omega} v_j v_j^\top\right)u_i - \lambda u_i = 0\\ \rightarrow u_i &= \left(\lambda \sigma^2 \mathbb{I} + \sum_{j : (i, j) \in \Omega} v_j v_j^\top \right)^{-1} \left(\sum_{j : (i, j) \in \Omega} M_{ij} v_j \right) \end{align*} Recall that the L2-regularized least squares setting (ridge regression), where $y \sim \mathcal{N}(X\omega, \sigma^2\mathbb{I})$ and $\omega \sim \mathcal{N}(0, \lambda^{-1}\mathbb{I})$, yields the MAP solution $\omega_{MAP} = (\lambda \sigma^2 \mathbb{I} + X^\top X)^{-1}X^\top y$. Inspecting the form of $u_i$ above, and letting $S_{u_i} = \{j : (i, j) \in \Omega\}$, $\vec{m}_{u_i} = M_{i, S_{u_i}}$, and $V_i$ be a matrix whose rows are $v_{S_{u_i}}$, our model becomes $\vec{m}_{u_i} \sim \mathcal{N}(V_i u_i, \sigma^2\mathbb{I}), u_i \sim \mathcal{N}(0, \lambda^{-1}\mathbb{I})$, which yields $u_i^{MAP} = (\lambda\sigma^2\mathbb{I} + V_i^\top V_i)^{-1} V_i^\top \vec{m}_{u_i}$, the same as what was derived above, except in matrix form. So, at each iteration, we need to solve $N_1 + N_2$ ridge regression problems.
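A sketch of one ridge-regression update $u_i = (\lambda\sigma^2\mathbb{I} + V_i^\top V_i)^{-1}V_i^\top \vec{m}_{u_i}$; the mask layout and hyperparameter values are assumptions made for illustration:

```python
import numpy as np

def update_u_row(i, M, mask, V, lam=1.0, sigma2=0.25):
    """Coordinate-ascent update for u_i given all v_j, as derived above.

    M: (N1, N2) rating matrix; mask[i, j] is True where M[i, j] is observed;
    V: (N2, d) matrix whose rows are the v_j.
    """
    obs = mask[i]                  # which columns j are in Omega for row i
    Vi = V[obs]                    # matrix with rows v_{S_{u_i}}
    m = M[i, obs]                  # observed ratings m_{u_i}
    d = V.shape[1]
    A = lam * sigma2 * np.eye(d) + Vi.T @ Vi
    return np.linalg.solve(A, Vi.T @ m)
```

The $v_j$ update is symmetric, using a column of $M$ and the rows of $u$.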
Because the “design matrix” (corresponding to $X$ in $y \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$) changes at every step, we have to iterate several times until convergence. It would be nice to get the posterior distribution of $u_{1:N_1}$ and $v_{1:N_2}$, but they become coupled in the likelihood, so the posterior does not factorize; that is, $$p(u_{1:N_1}, v_{1:N_2} | D) \ne \left(\prod_{i = 1}^{N_1} p(u_i | D) \right)\left(\prod_{j = 1}^{N_2} p(v_j | D)\right)$$ even though the corresponding factorization does hold for the prior. However, we do have “conditional conjugacy”, in that we can write $$p(u_i | D, u_{\not{i}}, v_{1:{N_2}}) \propto p(D | u_i, v_{1:{N_2}})p(u_i)$$ where $u_{\not{i}}$ denotes all $u_j$ such that $j \ne i$, $p(D | u_i, v_{1:{N_2}})$ is an independent Gaussian likelihood, and $p(u_i)$ is a Gaussian prior on the “regression coefficients”. Using our matrix formulation described above, we can write $$p(u_i | D, V_i) = \mathcal{N}(u_i | \mu, \Sigma); \quad \mu = (\lambda \sigma^2\mathbb{I} + V_i^\top V_i)^{-1} V_i^\top \vec{m}_{u_i}; \quad \Sigma = \left(\lambda \mathbb{I} + \frac{1}{\sigma^2} V_i^\top V_i\right)^{-1}$$ It follows that, conditioned on everything else, $u_i$ has a Gaussian posterior, just as in the linear regression problem.

#### Solving using Gibbs Sampling

Obtaining a point-estimate solution is useful, but somewhat defeats the purpose of a Bayesian/probabilistic model, which should capture uncertainty over all possible solutions. Sampling methods are another way to obtain an estimate of the solution, but rather than picking a specific point they sample from the posterior. The samples can then be used to estimate statistics of possible solutions. In a general sense, we want the posterior $p(\theta_1, \ldots, \theta_N | D)$ but we can't compute it.
In Gibbs sampling, we instead calculate each conditional $p(\theta_i | \theta_{\not{i}}, D)$ and sample from it as follows: First, initialize $\theta_1^{(0)}, \ldots, \theta_N^{(0)}$, then for each step $t$ sample $\theta_i^{(t)} \sim p(\theta_i | \theta_1^{(t)}, \ldots, \theta_{i - 1}^{(t)}, \theta_{i + 1}^{(t - 1)}, \ldots, \theta_N^{(t - 1)}, D)$. After a “burn-in period”, e.g. $t = 1000$, we can start collecting samples of $\theta_1^{(t)}, \ldots, \theta_N^{(t)}$, keeping only every e.g. 100th sample to account for correlations between successive samples. In the specific example of Gibbs sampling for the matrix factorization model, we begin by randomly initializing $u_{1:N_1}$ and $v_{1:N_2}$; at each iteration, we construct $\vec{m}_{u_i}$ and $V_i$ from the most recent values, set $\mu = (\lambda \sigma^2 \mathbb{I} + V_i^\top V_i)^{-1} V_i^\top \vec{m}_{u_i}$ and $\Sigma = \left(\lambda \mathbb{I} + \frac{1}{\sigma^2} V_i^\top V_i\right)^{-1}$, and sample $u_i \sim \mathcal{N}(\mu, \Sigma)$; we do the same thing for each $v_j$, except that $\vec{m}_{v_j}$ is built from the observed entries of column $j$ of $M$ and the corresponding design matrix is built from the rows $u_1, \ldots, u_{N_1}$.

### Sparse Regression

Recall that in the regression setting, we have data $y \in \mathbb{R}^N$ and covariates $X \in \mathbb{R}^{N \times d}$. We assume $y$ is produced by the likelihood model $y \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$ with $\omega$ unknown, with prior $\omega \sim \mathcal{N}(0, A^{-1})$. In the case where $d \gg N$, we might believe that only a small subset of the dimensions of $X$ are important for predicting $y$. We'd like to be able to select these dimensions implicitly while estimating $\omega$. We do this using a “sparsity promoting prior”. If we say that $\omega_k \sim \mathcal{N}(0, \alpha_k^{-1})$ so that $A = \mathrm{diag}(\alpha_1, \ldots, \alpha_d)$, then we can encourage sparsity by encouraging large values for some of the $\alpha_k$: in this way, the prior variance of $\omega_k$ approaches zero (as $\alpha_k \rightarrow \infty$, $\alpha_k^{-1} \rightarrow 0$), so that $\omega_k$ will be close to $0$ with high probability.
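The sparsity mechanism can be checked numerically: draws of $\omega_k \sim \mathcal{N}(0, \alpha_k^{-1})$ concentrate at zero as $\alpha_k$ grows. The particular values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior draws of omega_k for a small and a large precision alpha_k.
alpha_small, alpha_large = 1.0, 1e6
w_small = rng.normal(0.0, np.sqrt(1.0 / alpha_small), size=10000)
w_large = rng.normal(0.0, np.sqrt(1.0 / alpha_large), size=10000)

# A large alpha_k forces the prior (and hence the posterior) mass toward 0.
spread_small = np.std(w_small)
spread_large = np.std(w_large)
```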
A Gamma prior on $\alpha_k$ can be used to encourage this, so that our model is given by $$y \sim \mathcal{N}(X\omega, \sigma^2\mathbb{I}); \quad \omega \sim \mathcal{N}(0, A^{-1}); \quad \alpha_k \sim_{i.i.d.} \mathrm{Gamma}(a, b)$$ so that $$p(\alpha_k | a, b) = \frac{b^a}{\Gamma(a)} \alpha_k^{a - 1}e^{-b\alpha_k}$$ We can find the MAP parameter estimates by computing \begin{align*} P(\omega, \alpha_{1:d} | X, y) &\propto p(y | X, \omega, \alpha_{1:d})p(\omega, \alpha_{1:d})\\ &\propto p(y | X, \omega)p(\omega | \alpha_{1:d})p(\alpha_{1:d})\\ &\propto p(y | X, \omega)p(\omega | \alpha_{1:d}) \prod_{i = 1}^d p(\alpha_i)\\ \rightarrow \log\left(p(\omega, \alpha_{1:d} | X, y)\right) &= \log\left(p(y | X, \omega)\right) + \log\left(p(\omega | \alpha_{1:d})\right) + \sum_{i = 1}^d \log\left(p(\alpha_i)\right) + \mathrm{const}\\ &= -\frac{1}{2\sigma^2} (y - X\omega)^\top(y - X\omega) + \sum_{i = 1}^d \left( -\frac{\alpha_i}{2}\omega_i^2 + \frac{1}{2}\log(\alpha_i) \right) + \sum_{i = 1}^d \left((a - 1)\log(\alpha_i) - b\alpha_i\right) + \mathrm{const}\\ &= \mathcal{L} \end{align*} which gives $$\omega_{MAP}, \alpha^{MAP}_{1:d} = \mathrm{arg}\max_{\omega, \alpha_{1:d}} \mathcal{L} = \mathrm{arg}\max_{\omega, \alpha_{1:d}} \log\left(p(\omega, \alpha_{1:d} | X, y)\right)$$ We can perform this optimization with alternating coordinate updates. First, initialize $\omega$, e.g. to the minimum-L2 solution $\omega = X^\top (X X^\top)^{-1}y$. Then, iteratively update $\omega$ by $$\nabla_\omega \mathcal{L} = 0 \rightarrow \omega = \left(A + \frac{1}{\sigma^2} X^\top X\right)^{-1} \frac{X^\top y}{\sigma^2}$$ and update each $\alpha_i$ by $$\frac{\partial \mathcal{L}}{\partial \alpha_i} = 0 \rightarrow \alpha_i = \frac{2a - 1}{\omega_i^2 + 2b}$$ We can encourage $\alpha_i$ to grow to $\infty$ when $\omega_i \rightarrow 0$ by setting $b = 10^{-16}$ and $a = \frac{1}{2} + \epsilon$, where the $\epsilon$ parameter sets how fast the values grow.
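A sketch of the alternating MAP updates above, on synthetic data where only the first covariate matters; all the constants ($\sigma^2$, $a$, $b$, the iteration count) are made-up assumptions:

```python
import numpy as np

def sparse_map(X, y, sigma2=0.01, a=0.5 + 1e-3, b=1e-16, n_iter=50):
    """Alternating MAP updates for omega and alpha_{1:d}, as derived above."""
    N, d = X.shape
    alpha = np.ones(d)
    for _ in range(n_iter):
        A = np.diag(alpha)
        # omega update: (A + X^T X / sigma^2)^{-1} X^T y / sigma^2
        w = np.linalg.solve(A + X.T @ X / sigma2, X.T @ y / sigma2)
        # alpha update: (2a - 1) / (omega_i^2 + 2b)
        alpha = (2 * a - 1) / (w ** 2 + 2 * b)
    return w, alpha
```

Irrelevant coefficients end up with much larger precisions $\alpha_i$ than relevant ones.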
Note that $\left(A + \sigma^{-2} X^\top X\right)^{-1}$ requires the inversion of a $d \times d$ matrix, and in this setting $d$ is large while $N$ is small and $A$ is diagonal. The matrix inversion lemma is the identity $$(A + BC^{-1}D)^{-1} = A^{-1} - A^{-1}B(C + DA^{-1}B)^{-1}DA^{-1}$$ which allows us to write $$(A + \sigma^{-2}X^\top X)^{-1} = A^{-1} - A^{-1}X^\top(\sigma^2 \mathbb{I} + XA^{-1}X^\top)^{-1} XA^{-1}$$ which involves only the inversion of a diagonal matrix and an $N \times N$ matrix.

#### Type II Maximum Likelihood

Sometimes, we can learn better models by integrating variables out against their priors, although this may make the model intractable. In the current example, our uncollapsed model is given by $y | \omega \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$ and $\omega | A \sim \mathcal{N}(0, A^{-1})$. We can collapse this model so that $p(y | A) = \int_{\omega} p(y | \omega) p(\omega | A)d\omega$ and $y | A \sim \mathcal{N}(0, \sigma^2 \mathbb{I} + XA^{-1}X^\top)$. In type II maximum likelihood, we keep a Bayesian model on $\omega$ but do not use a prior on $A$; instead, we integrate out $\omega$ and perform maximum likelihood on the resulting marginal likelihood by computing $\alpha_{1:d}^{ML2} = \mathrm{arg}\max_{\alpha_{1:d}} \log(p(y | A))$.
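The matrix inversion lemma can be verified numerically; the sizes below are arbitrary, chosen so that $d \gg N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 5, 40                      # d large, N small, as in the sparse setting
X = rng.normal(size=(N, d))
alpha = rng.uniform(0.5, 2.0, size=d)
A_inv = np.diag(1.0 / alpha)      # A is diagonal, so A^{-1} is cheap
sigma2 = 0.3

# Direct d x d inversion.
direct = np.linalg.inv(np.diag(alpha) + X.T @ X / sigma2)

# Matrix inversion lemma: only an N x N system is inverted.
small = np.linalg.inv(sigma2 * np.eye(N) + X @ A_inv @ X.T)
lemma = A_inv - A_inv @ X.T @ small @ X @ A_inv
```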
This gives an objective $$\mathcal{L} = \log(p(y | A)) = -\frac{1}{2} \log\left(\left|\sigma^2 \mathbb{I} + XA^{-1}X^\top \right|\right) - \frac{1}{2} y^\top (\sigma^2 \mathbb{I} + XA^{-1}X^\top)^{-1}y + \mathrm{const}$$ If we let $C_{\not{i}} = \sigma^2\mathbb{I} + \sum_{j \ne i} \alpha_j^{-1} x_j x_j^\top$ where $x_j$ is the $j$th column of $X$, and use the determinant identity $|C_{\not{i}} + \alpha_i^{-1} x_i x_i^\top| = |C_{\not{i}}|(1 + \alpha_i^{-1} x_i^\top C_{\not{i}}^{-1}x_i)$ together with the matrix inversion lemma, we can write $$\mathcal{L} = -\frac{1}{2}\log(|C_{\not{i}}|) - \frac{1}{2} \log(1 + \alpha_i^{-1} x_i^\top C_{\not{i}}^{-1} x_i) - \frac{1}{2} y^\top C_{\not{i}}^{-1} y + \frac{1}{2} \left(\frac{(y^\top C_{\not{i}}^{-1} x_i)^2}{\alpha_i + x_i^\top C_{\not{i}}^{-1}x_i}\right)$$ Writing $s_i = x_i^\top C_{\not{i}}^{-1} x_i$ and $q_i = y^\top C_{\not{i}}^{-1} x_i$, we can compute our update by setting \begin{align*} \frac{\partial\mathcal{L}}{\partial \alpha_i} &= \frac{1}{2} \left(\frac{\alpha_i^{-1} s_i}{\alpha_i + s_i}\right) - \frac{1}{2}\left(\frac{q_i^2}{(\alpha_i + s_i)^2}\right) = 0\\ \rightarrow \alpha_i &= \frac{s_i^2}{q_i^2 - s_i} \end{align*} However, this is only a valid solution when it is positive (that is, when $q_i^2 > s_i$) because $\alpha_i > 0$; otherwise, the maximum is attained as $\alpha_i \rightarrow \infty$. This produces the following algorithm: First, initialize $\alpha_i = \infty$ for all $i$ (so that initially $C_{\not{i}} = \sigma^2 \mathbb{I}$). Then, until convergence, for each $i \in \{1, \ldots, d\}$, construct $C_{\not{i}}$ and set $$\alpha_i = \frac{s_i^2}{q_i^2 - s_i}$$ when $q_i^2 > s_i$, and $\alpha_i = \infty$ otherwise. The updates of $C_{\not{i}}$ can be done as rank-one updates, which are numerically cheap.
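One way to write a single type-II update, following the standard fast marginal likelihood result with “sparsity” factor $s_i = x_i^\top C_{\not{i}}^{-1} x_i$ and “quality” factor $q_i = y^\top C_{\not{i}}^{-1} x_i$; this is a sketch, not verbatim from the notes, and the test inputs are toy values:

```python
import numpy as np

def alpha_update(x_i, y, C_not_i_inv):
    """Type-II ML update for a single alpha_i given C_{not i}^{-1}.

    Returns s^2 / (q^2 - s) when q^2 > s, and infinity otherwise
    (the objective is then maximized as alpha_i -> infinity).
    """
    s = x_i @ C_not_i_inv @ x_i    # "sparsity" factor
    q = y @ C_not_i_inv @ x_i      # "quality" factor
    if q ** 2 > s:
        return s ** 2 / (q ** 2 - s)
    return np.inf
```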
Once $A = \mathrm{diag}(\alpha_1, \ldots, \alpha_d)$ is computed, we can learn the posterior on $\omega$ and use it for prediction by \begin{align*} y &\sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})\\ \omega &\sim \mathcal{N}(0, A^{-1})\\ p(\omega | y, X, A) &\propto p(y | \omega, X)p(\omega | A)\\ p(\omega | y, X, A) &= \mathcal{N}(\omega | \mu, \Sigma)\\ \mu &= \left(A + \frac{1}{\sigma^2}X^\top X\right)^{-1} \frac{X^\top y}{\sigma^2}\\ \Sigma &= \left(A + \frac{1}{\sigma^2}X^\top X\right)^{-1}\\ p(y_{new} | X_{new}, y, X, A) &= \int p(y_{new} | X_{new}, \omega)p(\omega | y, X, A)d\omega\\ &= \mathcal{N}(y_{new} | \mu_{new}, \sigma_{new}^2)\\ \mu_{new} &= X_{new}^\top\mu\\ \sigma_{new}^2 &= \sigma^2 + X_{new}^\top \Sigma X_{new}\\ \end{align*}

## Clustering

In the clustering setting we are given data $\{x_1, \ldots, x_N\}$ where $x_i \in \mathbb{R}^d$, without labels. We'd like to partition the data into $K$ groups so that points in the same group are “similar” by some measure and points in different groups are “different”. If we measure similarity based on distance, then points within the same group will be close in $\mathbb{R}^d$ and points in different groups should be far apart.

### K-means

K-means is a non-probabilistic clustering model which shares many characteristics with probability-based ones. The K-means algorithm starts with an implicit assumption of there being $K$ clusters in the input data, each of which is characterized by its centroid, denoted $\mu_1, \ldots, \mu_K$ where $\mu_i \in \mathbb{R}^d$. Each datapoint is then assigned to the cluster whose centroid it is closest to. The K-means algorithm learns the location of the centroids, and therefore performs the clustering.
Let $c \in \{0, 1\}^{N \times K}$ be the cluster assignment matrix, with rows $c_n$, so that $$c_{nj} = \begin{cases} 1,& x_n \in \mathrm{cluster \;} j\\ 0,& \mathrm{otherwise}\\ \end{cases}$$ We can formulate the “cost” of some clustering by computing the sum of the squared distances between each point and the cluster it is assigned to: $$J = \sum_{n = 1}^N \sum_{j = 1}^K c_{nj} \|x_n - \mu_j\|^2$$

#### Algorithm

Minimizing this objective leads naturally to an algorithm which iteratively alternates between updating $c_n$ for $n = 1, \ldots, N$ and then $\mu_j$ for $j = 1, \ldots, K$. More specifically, at each iteration, we hold $\mu_1, \ldots, \mu_K$ fixed and for each $n \in \{1, \ldots, N\}$ set $c_{nj} = 1$ when $j = \mathrm{arg}\min_i \|x_n - \mu_i\|^2$ and $0$ otherwise. Then, for each $j \in \{1, \ldots, K\}$ we hold $c_{nj}$ fixed and minimize $\sum_n c_{nj}\|x_n - \mu_j\|^2$, which we can differentiate and set equal to zero to obtain $$\mu_j = \frac{\sum_n c_{nj}x_n}{\sum_n c_{nj}}$$ which is just the mean of the data points in cluster $j$.

### Gaussian Mixture Models

GMMs are a probabilistic clustering scheme where the density of $\{x_1, \ldots, x_N\}, x_i \in \mathbb{R}^d$ is modeled by a mixture of $K$ Gaussians, so that $$p(x | \mu_{1 : K}, \Sigma_{1 : K}, \pi) = \sum_{j = 1}^K \pi_j \mathcal{N}(x | \mu_j, \Sigma_j)$$ where $\pi \in \mathbb{R}^K$ is a probability vector, so that $\pi_j \geq 0$ and $\sum_j \pi_j = 1$, and $\mu_j$ and $\Sigma_j$ are the mean and covariance for cluster $j$ respectively. If the data are assumed to be independent, the joint likelihood is given by $$p(x_1, \ldots, x_N | \pi, \mu, \Sigma) = \prod_{n = 1}^N p(x_n | \pi, \mu, \Sigma) = \prod_{n = 1}^N \left(\sum_{j = 1}^K \pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)\right)$$ which gives a log-likelihood $$\log(p(x_{1 : N} | \pi, \mu, \Sigma)) = \sum_n \log \left( \sum_j \pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)\right)$$ which is difficult to maximize directly due to the summation within the logarithm.
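The K-means alternation described above can be sketched in a few lines; initializing the centroids at randomly chosen data points is one common choice, not the only one:

```python
import numpy as np

def kmeans(X, K, n_iter=50, seed=0):
    """Plain K-means: alternate assignments c_n and centroid updates mu_j."""
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=K, replace=False)]   # init centroids at data points
    for _ in range(n_iter):
        # Assignment step: each point goes to its nearest centroid.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, K) squared dists
        c = d2.argmin(axis=1)
        # Centroid step: each centroid becomes the mean of its assigned points.
        for j in range(K):
            if np.any(c == j):
                mu[j] = X[c == j].mean(axis=0)
    return c, mu
```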
However, we can make this quantity much easier to work with by introducing hidden data/latent variables to the model.

#### Latent Variables

We can add variables $c_n$ to the GMM model so that we keep the desired marginals. We want it to be the case that $$\sum_{c_n} p(x_n, c_n | \mu, \Sigma, \pi) = p(x_n | \mu, \Sigma, \pi) = \sum_{j = 1}^K \pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)$$ This leads to a hierarchical process where we first sample $c_n \sim_{iid} \mathrm{Mult}(\pi)$ and then sample $x_n | c_n \sim \mathcal{N}(\mu_{c_n}, \Sigma_{c_n})$ (where $\mu_{c_n}$ and $\Sigma_{c_n}$ denote, by abuse of notation, the mean and covariance of the Gaussian to which $x_n$ is assigned). Summing over the likelihood gives $$\sum_{c_n} p(x_n, c_n | \mu, \Sigma, \pi) = \sum_{c_n} p(x_n | c_n, \mu, \Sigma) p(c_n | \pi) = \sum_{j = 1}^K \mathcal{N}(x_n | \mu_j, \Sigma_j)\pi_j$$ If we keep $c_n$ as part of the model, then the joint likelihood becomes \begin{align*} p(x_{1:N}, c_{1:N} | \mu, \Sigma, \pi) &= \prod_{n = 1}^N p(x_n, c_n | \mu, \Sigma, \pi)\\ &= \prod_{n = 1}^N p(x_n | c_n, \mu, \Sigma)p(c_n | \pi)\\ &= \prod_{n = 1}^N \prod_{j = 1}^K \mathcal{N}(x_n | \mu_j, \Sigma_j)^{c_{nj}} \pi_j^{c_{nj}} \end{align*} Note that there is no longer a summation inside the product, so our log-likelihood will be tractable. We can compute the log-likelihood by $$\log(p(x_{1:N}, c_{1:N} | \mu, \Sigma, \pi)) = \sum_{n = 1}^N \sum_{j = 1}^K c_{nj} \left(\log(\pi_j) - \frac{1}{2}(x_n - \mu_j)^\top \Sigma_j^{-1} (x_n - \mu_j) - \frac{1}{2}\log(|\Sigma_j|)\right) + \mathrm{const}$$

#### Maximum Likelihood Algorithm

As with K-means, we perform an iterative alternating procedure, first holding $\mu, \Sigma, \pi$ fixed and maximizing over $c_n$, and then holding $c_n$ fixed and fitting $\pi, \mu, \Sigma$. Specifically, at each iteration, we first set $c_{nj} = 1$ when $j = \mathrm{arg}\max_i \left(\log(\pi_i) + \log(\mathcal{N}(x_n | \mu_i, \Sigma_i))\right)$ and $0$ otherwise for each $n \in \{1, \ldots, N\}$.
Then, based on computing partial derivatives of the likelihood and setting equal to zero, we hold $c_{nj}$ fixed and set $$\mu_j = \frac{\sum_n c_{nj}x_n}{\sum_n c_{nj}}$$ (the mean of the data in each cluster), $$\Sigma_j = \frac{\sum_n c_{nj} (x_n - \mu_j)(x_n - \mu_j)^\top}{\sum_n c_{nj}}$$ (the covariance of the data in each cluster), and, using Lagrange multipliers to enforce $\sum_j \pi_j = 1$, $$\pi_j = \frac{\sum_n c_{nj}}{N}$$ (the empirical distribution of the assignments). Note that we are using maximum likelihood because $\mu, \Sigma$ and $\pi$ don't have priors. As a result, $\Sigma_j$ will not be full rank if $\sum_n c_{nj} < d$, which is undesirable but could be handled with a prior on $\Sigma_j$. Furthermore, we can interpret the $c_n$ update step by noting that \begin{align*} \mathrm{arg}\max_j \left( \log(\pi_j) + \log(\mathcal{N}(x_n | \mu_j, \Sigma_j))\right) &= \mathrm{arg}\max_j \pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)\\ &= \mathrm{arg}\max_j \frac{\pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)}{\sum_i \pi_i \mathcal{N}(x_n | \mu_i, \Sigma_i)} \end{align*} which is the MAP solution for $c_n$ because \begin{align*} p(c_{nj} = 1 | x_n, \pi, \mu, \Sigma) &\propto p(x_n | c_{nj} = 1, \mu, \Sigma)p(c_{nj} = 1 | \pi)\\ &\propto \mathcal{N}(x_n | \mu_j, \Sigma_j)\pi_j \end{align*} This suggests that we may be able to compute a posterior for $c_{nj}$ rather than doing a hard assignment.

### The EM Algorithm

Taking the GMM as an example, we want to learn the maximum likelihood solution to $$p(x_{1:N} | \theta) = \prod_{n = 1}^N p(x_n | \theta)$$ by maximizing $\sum_n \log(p(x_n | \theta))$.
If we introduce the latent class assignment variables $c_n$ to obtain the joint distribution $p(x_n, c_n | \theta)$, we can marginalize to obtain an alternate form for $p(x_n | \theta)$ given by $$p(x_n | \theta) = \sum_{c_n} p(x_n, c_n | \theta)$$ We can therefore equivalently maximize $$\sum_{n = 1}^N \log\left(\sum_{c_n} p(x_n, c_n | \theta) \right)$$ which can be made tractable by assuming conditional conjugacy. That is, given the observed data $x_n$ and parameter setting $\theta^{(old)}$, we can compute a posterior distribution for the hidden data $p(c_n | x_n, \theta^{(old)})$ analytically. In this way, we perform the optimization in stages, which is the core principle of the EM algorithm, which we can formulate as follows: First, given the current value of $\theta$ (called $\theta^{(old)}$), we calculate the posterior $p(c_n | x_n, \theta^{(old)})$. Second, we construct the objective function \begin{align*} Q(\theta, \theta^{(old)}) &= \sum_{n = 1}^N \sum_{c_n} p(c_n | x_n, \theta^{(old)})\log(p(x_n, c_n | \theta))\\ &= \sum_{n = 1}^N \mathbb{E}(\log(p(x_n, c_n | \theta)) | \theta^{(old)}) \end{align*} Finally, we update $\theta$ by setting $\theta^{(new)} = \mathrm{arg}\max_\theta Q(\theta, \theta^{(old)})$, which becomes $\theta^{(old)}$ for the next iteration. This process is repeated until convergence. Constructing the objective function is called the E (expectation) step, and maximizing $Q$ is called the M (maximization) step. For MAP estimation on $\theta$, we can add $\log(p(\theta))$ to $Q(\theta, \theta^{(old)})$ and then optimize. Generally, the algorithm is considered to have converged when the value of $\theta$ doesn't change much between M steps.

##### Example: GMM

In the GMM setting, $\theta = \{\mu_{1:K}, \Sigma_{1:K}, \pi\}$.
Let $\gamma_{nj}$ be the posterior of $c_{nj}$ under the current parameters, given by $$\gamma_{nj} = p(c_{nj} = 1 | x_n, \mu, \Sigma, \pi) = \frac{\pi_j \mathcal{N}(x_n | \mu_j, \Sigma_j)}{\sum_{i = 1}^K \pi_i \mathcal{N}(x_n | \mu_i, \Sigma_i)}$$ Now, from above we have $\log(p(x_n, c_{nj} = 1 | \mu, \Sigma, \pi)) = \log(\pi_j) + \log(\mathcal{N}(x_n | \mu_j, \Sigma_j)) + \mathrm{const}$, so in the E-step we set \begin{align*} Q(\theta, \theta^{(old)}) &= \sum_{n = 1}^N \sum_{j = 1}^K p(c_{nj} = 1 | x_n, \mu, \Sigma, \pi) \log(p(x_n, c_{nj} = 1 | \mu, \Sigma, \pi))\\ &= \sum_{n = 1}^N \sum_{j = 1}^K \gamma_{nj} \left(\log(\pi_j) - \frac{1}{2}(x_n - \mu_j)^\top \Sigma_j^{-1} (x_n - \mu_j) - \frac{1}{2} \log(|\Sigma_j|)\right) \end{align*} and in the M-step we compute the partial derivatives of $Q(\theta, \theta^{(old)})$ with respect to each parameter to set \begin{align*} \mu_j &= \frac{\sum_n \gamma_{nj}x_n}{\sum_n \gamma_{nj}}\\ \Sigma_j &= \frac{\sum_n \gamma_{nj} (x_n - \mu_j)(x_n - \mu_j)^\top}{\sum_n \gamma_{nj}}\\ \pi_j &= \frac{\sum_n \gamma_{nj}}{N} \end{align*} (using Lagrange multipliers as before). Note that this formulation is the same as above except that we have “soft” assignments.

#### Theoretical Motivation

For simplicity, assume we only have one observation $x$, so that we seek to maximize $\log(p(x | \theta))$ with respect to $\theta$. Given some other model variable $c$, the product rule gives $p(c | x, \theta)p(x | \theta) = p(x, c | \theta)$, so that $\log(p(x | \theta)) = \log(p(x, c | \theta)) - \log(p(c | x, \theta))$. Now, we introduce some distribution $q(c)$ on $c$, which for now is arbitrary.
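A sketch of EM for a GMM with the E and M steps above; the log-space normalization and the small ridge added to each covariance are practical safeguards, not part of the derivation, and the init scheme is an assumption:

```python
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """EM for a Gaussian mixture with soft responsibilities gamma_{nj}."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X[rng.choice(N, size=K, replace=False)].copy()
    Sigma = np.stack([np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: gamma_{nj} proportional to pi_j N(x_n | mu_j, Sigma_j).
        G = np.zeros((N, K))
        for j in range(K):
            diff = X - mu[j]
            Sinv = np.linalg.inv(Sigma[j])
            logdet = np.linalg.slogdet(Sigma[j])[1]
            quad = np.einsum('ni,ij,nj->n', diff, Sinv, diff)
            G[:, j] = np.log(pi[j]) - 0.5 * quad - 0.5 * logdet
        G = np.exp(G - G.max(axis=1, keepdims=True))
        G /= G.sum(axis=1, keepdims=True)
        # M-step: weighted means, covariances, and mixing weights.
        Nk = G.sum(axis=0)
        pi = Nk / N
        for j in range(K):
            mu[j] = (G[:, j] @ X) / Nk[j]
            diff = X - mu[j]
            Sigma[j] = (diff.T * G[:, j]) @ diff / Nk[j] + 1e-6 * np.eye(d)
    return pi, mu, Sigma, G
```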
We can now compute \begin{align*} &\log(p(x | \theta)) = \log(p(x, c | \theta)) - \log(p(c | x, \theta))\\ \rightarrow & q(c)\log(p(x | \theta)) = q(c)\log(p(x, c | \theta)) - q(c)\log(p(c | x, \theta))\\ \rightarrow & \sum_c q(c)\log(p(x | \theta)) = \sum_c q(c)\log(p(x, c | \theta)) - \sum_cq(c)\log(p(c | x, \theta))\\ \rightarrow & \log(p(x | \theta)) = \sum_c q(c)\log(p(x, c | \theta)) - \sum_c q(c)\log(p(c | x, \theta))\\ \rightarrow & \log(p(x | \theta)) = \sum_c q(c)\log(p(x, c | \theta)) - \sum_c q(c)\log(q(c)) + \sum_c q(c)\log(q(c)) - \sum_c q(c)\log(p(c | x, \theta))\\ \rightarrow & \log(p(x | \theta)) = \sum_c q(c)\log\left(\frac{p(x, c | \theta)}{q(c)}\right) + \sum_c q(c)\log\left(\frac{q(c)}{p(c | x, \theta)}\right) \end{align*} The term $$\sum_c q(c)\log\left(\frac{p(x, c | \theta)}{q(c)}\right)$$ is an objective function $\mathcal{L}(q, \theta)$ (the evidence lower bound), and the term $$\sum_c q(c)\log\left(\frac{q(c)}{p(c | x, \theta)}\right)$$ is the KL divergence between $q$ and $p$ (which is zero when $q = p$). So, the EM algorithm is a method for maximizing $\log(p(x | \theta))$ by working instead with $\mathcal{L}$ and the KL divergence in a two-step process. In the E step, we set $q(c) = p(c | x, \theta)$, which sets $\mathrm{KL}(q \| p) = 0$ and leaves $\log(p(x | \theta))$ unchanged because $\theta$ is fixed to $\theta^{(old)}$ in the E step. This increases $\mathcal{L}(q, \theta)$ to $\log(p(x | \theta))$. In the M-step, we maximize $\mathcal{L}(q, \theta)$ with respect to $\theta$, holding $q(c) = p(c | x, \theta^{(old)})$ fixed, which is equivalent to setting $$\theta^{(new)} = \mathrm{arg}\max_\theta Q(\theta, \theta^{(old)}) = \mathrm{arg}\max_\theta \sum_c p(c | x, \theta^{(old)}) \log(p(x, c | \theta))$$ This also increases $\mathcal{L}$. Furthermore, $q(c)$ (which was set to $p(c | x, \theta^{(old)})$) is no longer equal to $p(c | x, \theta^{(new)})$, so $\mathrm{KL}(q \| p)$ becomes positive again (the KL divergence is nonnegative and is zero only when $p = q$).
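The decomposition $\log(p(x | \theta)) = \mathcal{L}(q, \theta) + \mathrm{KL}(q \| p)$ can be checked on a tiny discrete example; the numbers are arbitrary:

```python
import numpy as np

# Joint p(x, c) for one observed x and a latent c in {0, 1}.
p_joint = np.array([0.3, 0.2])          # p(x, c=0), p(x, c=1)
p_x = p_joint.sum()                      # marginal likelihood p(x)
post = p_joint / p_x                     # posterior p(c | x)

q = np.array([0.7, 0.3])                 # an arbitrary distribution q(c)
elbo = np.sum(q * np.log(p_joint / q))   # sum_c q(c) log(p(x, c) / q(c))
kl = np.sum(q * np.log(q / post))        # KL(q || p(c | x))
```

The two pieces always sum to $\log p(x)$, and the bound is tight exactly when $q$ equals the posterior.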
Our new value $\theta^{(new)}$ therefore satisfies $\log(p(x | \theta^{(new)})) \geq \log(p(x | \theta^{(old)}))$. Now that $q(c) \ne p(c | x, \theta^{(new)})$, we can return to the E-step.

##### Example: Sparse Regression

Recall that in the sparse regression setting we have data $X \in \mathbb{R}^{N \times d}$ with labels $y \in \mathbb{R}^N$, and we model $y \sim \mathcal{N}(X\omega, \sigma^2 \mathbb{I})$ and $\omega_k \sim_{iid} \mathcal{N}(0, \alpha_k^{-1})$ for $k \in \{1, \ldots, d\}$. We can estimate $$\alpha_{1:d} = \mathrm{arg}\max_{\alpha_{1 : d}} \log(p(y | X, \alpha_{1:d}))$$ using the EM algorithm. First, note that we have $$p(y | X, \alpha_{1 : d}) = \int_\omega p(y, \omega | X, \alpha_{1:d}) d\omega$$ so it is natural to let $\omega$ be the latent variable in this model. In the E-step, we calculate $p(\omega | y, X, \alpha_{1 : d}^{(old)}) = \mathcal{N}(\mu, \Sigma)$ by setting $$\mu = \left(A + \frac{1}{\sigma^2} X^\top X\right)^{-1} \frac{X^\top y}{\sigma^2}$$ and $$\Sigma = \left(A + \frac{1}{\sigma^2} X^\top X\right)^{-1}$$ Then, we construct the objective \begin{align*} Q(\alpha_{1:d}, \alpha_{1:d}^{(old)}) &= \int p(\omega | y, X, \alpha_{1:d}^{(old)}) \log(p(y, \omega | X, \alpha_{1:d}))d\omega\\ &= -\frac{1}{2\sigma^2} \left(y^\top y - 2y^\top X \mathbb{E}[\omega] + \mathbb{E}[\omega^\top X^\top X \omega]\right) + \sum_{k = 1}^d \left(-\frac{\alpha_k}{2} \mathbb{E}[\omega_k^2] + \frac{1}{2}\log(\alpha_k)\right) + \mathrm{const} \end{align*} And in the M-step, we set $$\frac{\partial Q(\alpha_{1:d}, \alpha_{1:d}^{(old)})}{\partial \alpha_k} = 0$$ which gives $$\frac{\partial Q}{\partial \alpha_k} = -\frac{1}{2} \mathbb{E}[\omega_k^2] + \frac{1}{2\alpha_k} = 0 \rightarrow \alpha_k = \frac{1}{\mathbb{E}[\omega_k^2]} = \frac{1}{\mu_k^2 + \Sigma_{k, k}}$$
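A sketch of these EM updates for the sparse model, on synthetic data where only the first coefficient is nonzero; $\sigma^2$ and the iteration count are assumptions:

```python
import numpy as np

def em_sparse_alpha(X, y, sigma2=0.01, n_iter=30):
    """EM for alpha_{1:d}: E-step computes N(mu, Sigma), M-step sets
    alpha_k = 1 / (mu_k^2 + Sigma_kk)."""
    N, d = X.shape
    alpha = np.ones(d)
    for _ in range(n_iter):
        # E-step: posterior p(omega | y, X, alpha) = N(mu, Sigma).
        Sigma = np.linalg.inv(np.diag(alpha) + X.T @ X / sigma2)
        mu = Sigma @ X.T @ y / sigma2
        # M-step: alpha_k = 1 / E[omega_k^2].
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))
    return alpha, mu, Sigma
```

Precisions for irrelevant coefficients grow large, pruning them, while relevant coefficients keep small precisions.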
# Why is the $\langle v_{x}^{2} \rangle=\frac{1}{3} \langle v^2 \rangle$? For a randomly moving particle. Or, I suppose that 1/3 could generalise to 1/n, where n is the non rotational degrees of freedom for that particle. Related reference Kinetic Theory of Gasses. - I don't believe that this is the case, I can certainly set up states whose expectation value of velocity in a certain direction is zero: $e^{i p x}$ has $\langle v_y \rangle \sim \langle \partial_y \rangle = 0$. – DJBunk Sep 6 '12 at 16:41 He may have been using it in a context so that I didn't understand that it was a special case, but Feynman in Chapter 39 of Vol 1 disagrees with you. – Alyosha Sep 6 '12 at 18:04 @DJBunk I think this question concerns a statistical average in a large population of particles, not the probabilistic average of quantum mechanics. – David Zaslavsky♦ Sep 6 '12 at 18:15 Yes, that's correct. – Alyosha Sep 6 '12 at 18:16 +1 kinetic theory of gases blew my mind 20 years ago, and you brought it all back with this question. – ja72 Sep 6 '12 at 18:23 If the system is rotationally invariant, then we should have $\langle v_x^2 \rangle = \langle v_y^2 \rangle = \langle v_z^2 \rangle$. Thus $\langle v^2 \rangle = \langle (v_x^2 + v_y^2 + v_z^2 )\rangle$ which gives $\langle v^2 \rangle = 3 \langle v_x^2 \rangle$. Your comment about generalization to n dimensions is also correct. Use dollar signs $around your code – Physics Monkey Sep 6 '12 at 18:31 It's not true. The averages of two quantities can be equal without the quantities being equal. In your example above, clearly the x velocity and y velocity are two different physical quantities. It just happens that their squares have the same average value when you look at many particles. – Physics Monkey Sep 6 '12 at 18:40 @Alyosha: the expectation value is linear, so in general you have$\langle a + b \rangle = \langle a \rangle + \langle b \rangle$. 
– Fabian Sep 6 '12 at 18:49
- Generally $\langle A + B \rangle = \langle A \rangle + \langle B \rangle$; this is more or less built into the setup. After all, expectation values are computed by evaluating certain sums or integrals. – orbifold Sep 6 '12 at 18:53
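The isotropy argument is easy to check numerically by sampling velocities whose Cartesian components are i.i.d. (as in a Maxwell–Boltzmann distribution); this is only an illustrative sketch with an arbitrary unit-variance distribution:

```python
import numpy as np

rng = np.random.default_rng(42)

# Isotropic 3D velocities: each Cartesian component i.i.d. normal.
v = rng.normal(size=(1_000_000, 3))

vx2 = np.mean(v[:, 0] ** 2)            # <v_x^2>
v2 = np.mean(np.sum(v ** 2, axis=1))   # <v^2> = <v_x^2 + v_y^2 + v_z^2>

# By symmetry, <v_x^2> should be one third of <v^2>.
print(vx2, v2 / 3)
```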
# Is there a subset of a non-regular language that is regular?

I am just curious: is there any non-regular language that has a regular subset?

- Sure, any finite subset of a language is regular. Or, if you need an infinite example: the set of palindromes over $\{a,b\}$ is not regular, but the set of palindromes over one letter is regular. – Thomas Andrews Nov 13 '12 at 18:23
- @Thomas what do you mean by a set of palindromes in one letter? Can you give an example? – user34790 Nov 13 '12 at 18:25
- Palindromes in one letter are just $a^*$ or $b^*$. Basically, any word with only one letter is a palindrome. – Thomas Andrews Nov 13 '12 at 18:25
- Ouch! You've asked 21 questions, and accepted only a handful of answers. If you have received answers which have been particularly helpful, please take the time to show appreciation by accepting such answers. – amWhy Nov 13 '12 at 18:37
- I can provide a cheat sheet from my "Theoretical Computer Science" course (in German). Just have a look at the top picture with all the boxes. From top to bottom (on the right-hand side) the descriptions are: all languages $\Sigma^*$, semi-decidable (Type 0), decidable, LOOP-decidable, NP, P, context-free (Type 2), regular (Type 3), finite. Hope this might help :) – Christian Ivicevic Nov 13 '12 at 18:53

The empty language is a regular subset of any language at all. More generally, every finite subset of any language is regular. For example, let $L$ be the language $\{ a^n \mid n\text{ is prime} \} = \{ aa, aaa, aaaaa, a^7, \ldots\}$. Let $R$ be the subset of $L$ consisting just of $\{ aa, aaaaa, a^{11}, a^{109} \}$. Then $L$ is not regular, but $R$ is.

- Can you give an example? – user34790 Nov 13 '12 at 18:25
- @user34790: Every finite language is regular because you can enumerate all elements extensionally. Furthermore, there exists a regular expression of the form "word1 | word2 | ... | wordN", which implies the regularity of all finite languages.
– Christian Ivicevic Nov 13 '12 at 18:56
- @MJD How is R regular? – user34790 Nov 13 '12 at 19:57
- Every finite set is regular. math.stackexchange.com/questions/216047/… – MJD Nov 13 '12 at 20:03

You are looking for regular "within" non-regular. Not an answer, but related, and more amusing, is the following. There are operations on languages that (however complex the language) yield a regular language. As an example there is the operator "all subsequences" (sometimes called "sparse subwords"). Starting with $\{ a^n b^{n^2}\mid n\ge 1\}$ we get $a^* b^*$. Shallit (in his book *A Second Course in Formal Languages and Automata Theory*) starts with the language of primes in ternary notation $\{ 2,10,12,21,102,111,122,201,212,1002,\dots\}$, and shows this gives the subsequence language $\Sigma^*2\Sigma^* \cup \Sigma^*1\Sigma^*0\Sigma^* \cup \Sigma^*1\Sigma^*1\Sigma^*1\Sigma^*$, where $\Sigma = \{0,1,2\}$. Nice, huh?

For example, $\{a^n b^n \mid n \ge 0\}$ is a non-regular language, but it has an infinite number of subsets that are regular languages: 1) the empty set, 2) $\{\lambda\}$, 3) $\{ab\}$, 4) $\{a^2 b^2\}$, ... and any finite combination of these languages.
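The finite-subset answer above can be exhibited concretely: any finite language is denoted by a plain alternation of its words, so an ordinary regex engine accepts exactly that subset (a small sketch; I use a slightly shorter $R$ than the answer's for readability):

```python
import re

# L = { a^n : n prime } is not regular, but the finite subset
# R = { a^2, a^5, a^11 } is: a regular expression for it is a bare alternation.
R = re.compile(r"a{2}|a{5}|a{11}")

assert R.fullmatch("a" * 2)
assert R.fullmatch("a" * 11)
assert R.fullmatch("a" * 7) is None   # a^7 lies in L but not in R
```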
Ella
====

Ella is an attempt to create a simple, CGI-based web framework in Haskell. It is mainly written to help me learn Haskell, and do so in the real, messy world of HTTP and HTML. I also didn't like the standard Haskell CGI module for various reasons, though, where possible, I reuse its code.

The code will evolve as my needs evolve, so I can't guarantee any API compatibility. I don't think that anyone else is using this code but me...

Features
--------

I've been inspired by Django, but this framework is far less ambitious in scope. It does not include an ORM or anything like that. Some features of Ella, in no particular order:

* Explicit request and response objects that are passed around (like Django, and unlike the CGI monad).

* Strongly typed extensible dispatching system. This means that you can use a single type signature on a function to cause the URL parsing mechanism to match and parse only data which can be converted to the correct type e.g.::

    -- routes definition:
    routes = [
        -- more routes here ...
        "posts/" <+/> anyParam "edit/" //-> editPost $ [loginRequired]
    ]

    editPost :: Int -> Request -> IO Response

  The type signature on editPost means that "posts/123/edit/" will be parsed and will pass the integer 123 and the request object to editPost, but "posts/abc/edit/" will not match. anyParam is based on a type class Param, so you can extend it to match your own types.

  The major downside to this approach is that 'DRY' URL reversing is impossible.
You have to create permalink functions that will necessarily duplicate information found in the 'routes' definition.

* 'View processors' - inspired by Django middleware, these allow pre and post processing for request handling, and can be installed globally, or on a function by function basis (as with the 'loginRequired' processor in the example above). View processors for signed cookies and CSRF protection are included.

* Proper support for character sets. At the moment I've only added defaults for UTF-8, but this is already much better than the broken support that the CGI module has (it packs bytestrings directly into Strings without attempting to convert, which works for latin1 only). Output is ByteStrings.

Example code
------------

A small but complete example, which I have actually used, is a simple web app for asking people to sign up to a mailing list by clicking on URLs in an e-mail:

http://bitbucket.org/spookylukey/mailinglistconfirm/src/tip/src/ConfirmCgi.hs

A much bigger but incomplete example is my blog code:

http://bitbucket.org/spookylukey/haskellblog/

Some failings:
--------------

* Assumes use of the IO monad in lots of places - it would probably be better to generalise to MonadIO or something.

* 'DRY' URL reversing impossible.

* CGI only at the moment. It might be possible to adapt for FastCGI etc., I don't know.

* No decent form handling or generating code. I've experimented with defining widgets, and I've used them in real code, but the advantages over raw HTML are not compelling. I have looked at other form handling libraries and haven't found anything that satisfies my needs. 'Formlets' is neat, but far too inflexible and simplistic, especially with the generated HTML.
# How do I handle specific tile/object collisions?

What do I do after the bounding box test against a tile to determine whether there is a real collision against the contents of that tile? And if there is, how should I move the object in response to that collision?

I have a small object, and I test for collisions against the tiles that each corner of it is on. Here's my current code, which I run for each of those (up to) four tiles:

    // get the bounding box of the object, in world space
    objectBounds = object->bounds + object->position;

    if ((objectBounds.right >= tileBounds.left) &&
        (objectBounds.left <= tileBounds.right) &&
        (objectBounds.top >= tileBounds.bottom) &&
        (objectBounds.bottom <= tileBounds.top))
    {
        // perform a specific test to see if it's a left, top, bottom
        // or right collision. If so, I check the nature of it
        // and where I need to place the object to respond to that collision...
        // [THIS IS THE PART THAT NEEDS WORK]
        if (lastkey == keydown[right] &&
            ((objectBounds.right >= tileBounds.left) &&
             (objectBounds.right <= tileBounds.right) &&
             (objectBounds.bottom >= tileBounds.bottom) &&
             (objectBounds.bottom <= tileBounds.top)))
        {
            object->position.x = tileBounds.left - objectBounds.width;
        }
        // etc.
    }

• You need some Vector2D in your life... – Gustavo Maciel Jan 26 '12 at 2:27
• "To determine whether there is a real collision against the contents of that tile" - so there are objects that live in the space defined by the tile? The tile itself doesn't define the collision? Do these objects have collision boxes of some kind? – Tetrad Jan 26 '12 at 7:15
• "How should I move the object in response to that collision?" I don't know, how do you want it to move? – Tetrad Jan 26 '12 at 7:16
• @Tetrad I'm just looking at the tiles... the tiles define the collision. – Thomas William Cannady Jan 26 '12 at 12:29

What I do in my 2D engine is define a Polygon type property (called CollisionPolygon) for all classes that represent collidable objects.
My Polygon type is basically just a list of lines (more precisely, line sections; each line is defined by its 2 end points). You have to define the edges of the collision polygon of your objects manually once. If you have only a couple of objects to be tested for collision, you can use complex polygons with 100+ edges and it won't affect the performance drastically. If you have hundreds of objects, then you should stick to 4-12 sided polygons, just in case.

The next step is to implement the point-in-polygon algorithm. Since you didn't state which programming language or framework you use, here's a sample implementation: https://stackoverflow.com/questions/217578/point-in-polygon-aka-hit-test

Then you create another method, using point-in-polygon, that tests if two polygons are colliding (if any of Polygon A's vertices is inside Polygon B, then it's a collision). NOTE: You have to do it both ways (it is possible that A has none of its vertices in B, but B has one or more of its vertices in A).

Also note that, depending on the velocity of the objects and your game's refresh rate, you might miss some collisions! Imagine two needle-like polygons crossing each other's path at high velocity. At time T1, they have no overlapping areas. At time T2, they intersect, but neither of them has vertices in the other one. This will not be detected with the point-in-polygon method. If this seems to be a problem in your engine, you will have to do some more research.

Finally, implement your own way of handling actual collisions. It really depends on what you want to do... In my engine, I always store the previous position of the object, and reset the colliding objects' positions to it whenever a collision occurs. I also calculate the inertia of the two objects (velocity * mass), multiply it with an arbitrary <1 value ("energy loss"), and calculate the new velocities of the colliding objects, using good old primary-school physics.
I recommend defining an interface for all of these (ICollidable, for example) that contains the following: a CollisionPolygon property (type: Polygon), a BoundingRectangle property (type: Rectangle), and a bool CollidesWith(ICollidable otherObj) method. Then your collision detection will be basically this:

1. For every ICollidable on the scene, do the BoundingRectangle test with all the other objects.
2. If two objects' bounding rectangles collide, then do the polygon-based collision test.
3. If the polygon-based test is also positive, then register the collision (perhaps by storing indexes of the two objects).
4. Once all the objects have been tested for collisions, go through the registered collision instances, and do your "OnCollision" stuff. In my engine, this is where I sum up all the inertia changes for a given object, and then apply the changes.

This explanation is a working solution, but it's not the most optimal.

You can build a mask for the tile and also the object it collides with... these may be pre-computed along with the bitmap you draw, but sometimes the mask is just made up of the non-zero pixels: where a pixel is color 0, you don't consider it to be part of the object or part of the tile. So, by using pixel color 0, you remove the need to build a separate mask.

Now, where a bounding box collision has occurred, you want to create a mask-collision box that encompasses both the tile and the object (this is in an offscreen buffer). For any pixel where both masks collide... that is, a pixel is visible on both masks, you have a collision.

What to do from here... you could bounce the object back in the direction it came from (which makes a collision feel "sticky"), or you could have another mask with collision actions stored in it... essentially saying "If you collide with this pixel, move in this direction". You can optimise this approach in several ways...
1) Use a preset color as the mask, meaning you don't need to build separate masks for collision.

2) When building the mask-collision box, it only needs to encompass either the tile OR the object, and then you draw to that box using clipping, just like when drawing to the screen.

3) If your mask is a bitmap array, you could use the number 1 to indicate that a pixel is collidable, and 0 for not being collidable; then you only need to perform a logical AND of the two masks together, and any non-zero value means a collision occurred.

4) If you are using palettised bitmaps, you could see if you could get away with only 127 colors in your bitmap, and allocate 1 bit in the colors to the mask, thereby combining methods 1 and 3.

5) If you have managed to implement methods 2 and 4, you might realise that you don't actually need to draw to a collision box, but can perform inline collision testing.
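Back at the original axis-aligned question, a common alternative to pixel masks is minimum-translation resolution: measure the overlap on each axis and push the object out along the smaller one. A sketch only — the box format `(left, bottom, right, top)` and all names are mine, not from the question's engine:

```python
def aabb_overlap(a, b):
    """Return (dx, dy) overlap of two boxes given as (left, bottom, right, top),
    or None if they do not intersect."""
    dx = min(a[2], b[2]) - max(a[0], b[0])
    dy = min(a[3], b[3]) - max(a[1], b[1])
    if dx <= 0 or dy <= 0:
        return None
    return dx, dy

def resolve(obj, tile):
    """Push obj out of tile along the axis of least overlap."""
    hit = aabb_overlap(obj, tile)
    if hit is None:
        return obj
    dx, dy = hit
    left, bottom, right, top = obj
    if dx < dy:
        # Horizontal push; direction chosen by comparing box centres.
        shift = -dx if (left + right) < (tile[0] + tile[2]) else dx
        return (left + shift, bottom, right + shift, top)
    shift = -dy if (bottom + top) < (tile[1] + tile[3]) else dy
    return (left, bottom + shift, right, top + shift)
```

For example, an object at `(9, 0, 11, 2)` overlapping a tile at `(10, 0, 12, 2)` is pushed one unit left, to `(8, 0, 10, 2)`, so it ends up flush against the tile's left edge.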
# Difference Engine

A machine to compute mathematical tables, resting on two facts:

– Any continuous function can be approximated by a polynomial (Weierstrass)
– Any polynomial can be computed from difference tables

For example, with $f(n) = n^{2} + n + 41$:

$$d_1(n) = f(n) - f(n-1) = 2n$$
$$d_2(n) = d_1(n) - d_1(n-1) = 2$$

Build a table from these equations: using addition alone, you can find the next value of the function for each new $n$.

source: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-823-computer-system-architecture-fall-2005/lecture-notes/
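The table update can be sketched in a few lines of code: each step adds the current first difference to the function value, and the constant second difference to the first difference (the function name is mine; the initial differences are read off the formulas above):

```python
def difference_engine(f0, d1, d2, steps):
    """Tabulate a quadratic using only addition, starting from
    f0 = f(0), d1 = d1(1), and the constant second difference d2."""
    values = [f0]
    f = f0
    for _ in range(steps):
        f += d1   # f(n) = f(n-1) + d1(n)
        d1 += d2  # d1(n+1) = d1(n) + d2
        values.append(f)
    return values

# f(n) = n^2 + n + 41:  f(0) = 41, d1(1) = 2, d2 = 2
print(difference_engine(41, 2, 2, 5))  # [41, 43, 47, 53, 61, 71]
```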
# Use IRF540 MOSFET to control LED strip

I was hoping to use this MOSFET shield to control a standard 3528 12 V LED strip. The shield looks like this:

However, it doesn't come with a datasheet (I guess that's my fault for being cheap) and now I'm confused about how to connect this to my Arduino Nano / Wemos D1. The black connector has +, - and s on the bottom, but both blue connectors have no labeling whatsoever, so I'm not sure how to connect it to the 12 V supply and the Arduino. I've googled, but haven't been able to find any documentation on this shield: there are lots of sites offering it, with the same (or similar) pictures, but never with any labeling. Maybe these things are so common that 'everybody knows' how to connect them? The MOSFET is an IRF540.

Also: I've read somewhere that I may need a transistor or "driver" to get the MOSFET to switch? I have relays lying around and they work fine for my purposes, but I was hoping to switch the LED strip without the noticeable click of a relay, which is why I was looking into this MOSFET. (I also have some solid-state relays lying around, but they're for AC.)

• Buying EE stuff that doesn't have a data sheet isn't being cheap, because it's likely you won't get the best performance without reverse engineering it to understand it, and this costs more (time is money etc.) than buying the right goods in the first place and saving the time of everyone who reads this post. – Andy aka Nov 6 '18 at 16:15
• The sloppy placement of the components is not making me optimistic about this board. – Elliot Alderson Nov 6 '18 at 16:24
• That power MOSFET is literally touching the connector to the microcontroller and defeating the protection of the optocoupler... I would advise not using this board; it looks like something designed by a beginner who didn't know what they were doing. – Hearth Nov 6 '18 at 17:17
• Get in touch with the supplier and ask for documentation rather than just complain about the lack.
I'm sure the supplier has had to deal with questions about the connections if they are not marked. The board is, however, simple enough that you could trace the circuit easily. – Jack Creasey Nov 6 '18 at 17:19
• I pointed out myself that I probably shouldn't have been so cheap; no need to point it out again, and it's not helpful in any way. I know the board doesn't look great in the photo; my actual boards (I got 4...) look much better. I can try to contact the seller, but I have a feeling that's not going to get me far (again: shouldn't have been so cheap). HOWEVER, this is all I currently have lying around. I don't have drawers and drawers full of this stuff. Is there anyone that can help me figure this thing out (bootleg or not), even if it's only to see if I can get it to work? @Felthry: I got 2 of those too… – RobIII Nov 6 '18 at 17:25
1. ## Modular Arithmetic

Find the value of x: (35-207) = x mod 11

Can someone show the steps for this answer? I just don't know what to do after I get -172.

2. Originally Posted by nippy
Find the value of x: (35-207) = x mod 11
Can someone show the steps for this answer? I just don't know what to do after I get -172.

$(35 - 207) \equiv x~\text{mod 11}$
$-172 \equiv x~\text{mod 11}$

So x is the remainder when you divide -172 by 11, i.e. it is the solution of the equation $-172 = 11 \cdot n + x$ where n is an integer and x is defined to satisfy $0 \leq x < 11$. I get n = -16 as giving x in the required interval, so $4 \equiv x~\text{mod 11}$.

-Dan
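The arithmetic is easy to check in code; note that Python's `%` operator already returns the representative in $[0, 11)$, even for negative numbers (a quick illustration):

```python
x = (35 - 207) % 11
print(x)  # 4, since -172 = 11 * (-16) + 4

# the defining equation from the thread
n = -16
assert -172 == 11 * n + x and 0 <= x < 11
```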
# How can I graph the amplitude for a given frequency window over time?

I have simple audio files of single plucked notes from a guitar. I want to graph the amplitude of each partial over time, rather than just have this color-coded "dB" intensity graphing I have here. I can delete the extraneous frequencies in iZotope RX easily enough, which I think does the same thing as FFT windowing. Is this the right approach? How can I next get the amplitude of this window over time? I know basic C++/JUCE but mostly for synth design. I'm not a pro on DSP, FFT, etc. I appreciate any guidance.

In general, I would:

- Compute the spectrogram (STFT) of the signal you're interested in.
- Perform any required scaling to get a good amplitude estimate.
- Apply some thresholding to treat the noise floor.
- Devise a scheme for identifying the amplitude maxima of the harmonics you're interested in.

This final point will be much easier if you know the fundamental. For example, if you're playing a note at 440 Hz (A440), and you know you only want to examine the fundamental, second, and third harmonics, you can easily define some brackets to search within. If you don't know the fundamental, you will have to implement some method for determining what the fundamental is. However, assuming you do know the fundamental, I've provided a MATLAB example. This example first builds an arbitrary signal with some sinusoids that decay at differing rates, then analyses the signal to identify harmonic amplitudes over time.

    %% Generate a signal with harmonics.
    % Build fundamental and harmonic signals.
    samplePeriod = 0.001;
    time = 0:samplePeriod:5; % [s]
    harmonics = [1, 3, 5, 7];
    fundamentalFreq = 50; % [Hz]
    componentFreqs = fundamentalFreq .* harmonics; % [Hz]
    componentFreqs = componentFreqs .* 2 .* pi; % [rad/s]
    componentFreqs = repmat(componentFreqs', 1, numel(time));
    componentSignals = sin(componentFreqs .* repmat(time, numel(harmonics), 1));

    % Apply decay to the harmonics.
    decay = [0, 0.1, 0.5, 1];
    harmonicAmplitude = zeros(numel(harmonics), numel(time));
    for iHarmonic = 1 : numel(harmonics)
        harmonicAmplitude(iHarmonic, :) = exp(-1 * decay(iHarmonic) * time);
        componentSignals(iHarmonic, :) = componentSignals(iHarmonic, :) .* harmonicAmplitude(iHarmonic, :) .* 1;
    end

    % Generate final signal.
    signal = sum(componentSignals, 1);

    %% Perform time-frequency analysis.
    sampleFrequency = 1 / samplePeriod;

    % Compute the spectrogram.
    NFFT = 2^12;
    winDuration = 0.1; % [s]
    winLength = winDuration * sampleFrequency;
    winFactor = mean(hamming(winLength));
    [S, freq, specTime, p] = spectrogram(signal, winLength, [], NFFT, sampleFrequency);

    % Compute FFT amplitude.
    amplitude = abs(S) .* 2 ./ winLength ./ winFactor;

    % Threshold.
    floorLevel = max(max(amplitude)) * 0.1;
    amplitude(amplitude < floorLevel) = 0;

    % Determine the amplitude of the harmonics of interest.
    fundamentalFreq = 50; % [Repeat of above]
    harmonicsOfInterest = 1 : 10;
    harmonicSearchWidth = 5; % [Hz]
    harmonicBracketLower = harmonicsOfInterest .* fundamentalFreq - harmonicSearchWidth / 2;
    harmonicBracketUpper = harmonicsOfInterest .* fundamentalFreq + harmonicSearchWidth / 2;

    % Find the amplitude maxima for each harmonic at each point in time.
    estimatedHarmonicAmplitude = zeros(numel(specTime), numel(harmonicsOfInterest));
    for iHarmonic = 1 : numel(harmonicsOfInterest)
        activeBracket = freq >= harmonicBracketLower(iHarmonic) & freq <= harmonicBracketUpper(iHarmonic);
        bracketAmplitude = amplitude(activeBracket, :);
        estimatedHarmonicAmplitude(:, iHarmonic) = max(bracketAmplitude, [], 1)';
    end

    % Visualise output.
    figure;
    surf(specTime, freq, amplitude, 'EdgeColor', 'none');
    xlabel('Time [s]');
    ylabel('Frequency [Hz]');
    zlabel('Amplitude');
    view(2);

    figure;
    plot(specTime, estimatedHarmonicAmplitude);
    xlabel('Time [s]');
    ylabel('Amplitude');
    box on;
    grid on;
    legendBuilder = @(a) ['Harmonic: ', num2str(a, '%1.0f')];
    for iHarmonic = 1 : numel(harmonicsOfInterest)
        legendString{iHarmonic} = legendBuilder(harmonicsOfInterest(iHarmonic));
    end
    legend(legendString);

I would save the filtered audio file as a .dat file using the command-line SoX application. This transforms the audio to time vs. amplitude values, which can then simply be displayed as a graph.

• "dat" is not a file format. You probably mean "raw audio samples", or something similar. – Marcus Müller Jun 22 at 18:20
• I did not claim it to be. Dat files are used for exactly the purpose the OP of the question wants to use it for. Dat files can't be played, obviously, only further processed. I use them often for this. – Petoetje59 Jun 23 at 14:23
# subset question

• October 3rd 2011, 08:07 AM
hmmmm
subset question
If I have that A is an open interval, then from the definition I have that $(a-r,a+r)\subset A\subset\mathbb{R}$. Can I say that $a+r\in A$? Thanks for any help. PS: sorry about the first post, I sent it from my phone and clearly something went wrong.
• October 3rd 2011, 08:09 AM
Plato
Re: subset question
Quote: Originally Posted by hmmmm: If I have the following $(-a,a)\subset A\subset\mathbb{R}$ can I say that $a\in A$?
It could be that $(-2a,a)=A$.
• October 3rd 2011, 09:05 AM
hmmmm
Re: subset question
Sorry about the mistake I made above, edited now.
• October 3rd 2011, 09:15 AM
Plato
Re: subset question
No again. Consider $(-2,2)=A$. Now clearly $(1-1,1+1)\subset A$, right? Does $1+1\in A~?$
• October 3rd 2011, 11:31 AM
hmmmm
Re: subset question
Yeah, it isn't. However, can we always choose an r so that it is?
• October 3rd 2011, 12:08 PM
Plato
Re: subset question
Look, this is a waste of time. We have no idea where you're going with this thread. Post a problem. A complete, well-formed problem. Stop making us guess as to what you are trying to ask.
• October 3rd 2011, 03:19 PM
hmmmm
Re: subset question
If A is an open subset then $\sup A\not\in A$.
Proof: Assume that $\alpha=\sup A\in A$. Then as A is open, from the definition we have that for $\alpha\in A\ \mbox{and}\ r>0$, $(\alpha-r,\alpha+r)\subset A$. Contradiction: we assumed that $\alpha=\sup A$, but $\alpha+r>\alpha\ \mbox{and}\ \alpha+r\in A$. I was asking if I can always choose an r such that $\alpha+r\in A$? Thanks for any help.
• October 3rd 2011, 03:32 PM
Plato
Re: subset question
Quote: Originally Posted by hmmmm: If A is an open subset then $\sup A\not\in A$. Proof: Assume that $\alpha=\sup A\in A$.
Then as A is open, from the definition we have that for $\alpha\in A\ \mbox{and}\ r>0$, $(\alpha-r,\alpha+r)\subset A$. Contradiction: we assumed that $\alpha=\sup A$, but $\alpha+r>\alpha\ \mbox{and}\ \alpha+r\in A$. I was asking if I can always choose an r such that $\alpha+r\in A$?

Well, thank you for finally posting an intelligible problem. It is well known that an open set in $\mathbb{R}$ cannot contain a maximal element. Recall that between any two real numbers there is a real number.
• October 4th 2011, 01:45 AM
hmmmm
Re: subset question
Is my proof of this statement then correct, specifically where I derived the contradiction and said $\alpha+r\in A$?
• October 4th 2011, 03:05 AM
Plato
Re: subset question
Quote: Originally Posted by hmmmm: Is my proof of this statement then correct, specifically where I derived the contradiction and said $\alpha+r\in A$?
No, you cannot say that. But you can say $\left( {\exists s} \right)\left[ {\alpha < s < \alpha + r} \right]$, and thus $s\in A~.$
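For reference, the corrected argument assembled in one place (a sketch; it assumes $A$ is nonempty, open, and bounded above so that $\sup A$ exists):

```latex
\textbf{Claim.} If $A \subseteq \mathbb{R}$ is open and $\alpha = \sup A$ exists,
then $\alpha \notin A$.

\textbf{Proof.} Suppose $\alpha \in A$. Since $A$ is open, there is $r > 0$ with
$(\alpha - r, \alpha + r) \subseteq A$. Choose $s = \alpha + r/2$, so that
$\alpha < s < \alpha + r$. Then $s \in A$ and $s > \alpha$, contradicting
$\alpha = \sup A$. \qed
```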
# Finding all partitions of a grid into k connected components

I am working on floor planning on small orthogonal grids. I want to partition a given $$m \times n$$ grid into $$k$$ (where $$k \leq nm$$, but usually $$k \ll nm$$) connected components in all possible ways, so that I can compute a fitness value for each solution and pick the best one. So far, the fitness evaluation sits at the end of the algorithm, with no branch-and-bound or other type of early termination, since the fitness computation requires the complete solution.

My current approach to listing all possible grid partitions into connected components is quite straightforward, and I am wondering what optimizations can be added to avoid listing duplicate partitions? There must be a better way than what I have right now. I know the problem is NP, but I would at least like to push my algorithm from brute force to a smart and efficient approach.

## Overview

For better visualization and description I will reformulate the task to an equivalent one: paint the grid cells using $$k$$ colors so that each color builds a single connected component (with respect to the 4-neighborhood) and, of course, the grid is completely painted. My approach so far:

1. Generate all seed scenarios. A seed scenario is a partial solution where each color is applied to a single cell only; the remaining cells are yet empty.
2. Collect all possible solutions for each seed scenario by expanding the color regions in a DFS manner.
3. Filter out duplicate solutions with the help of a hash table.

## Seed scenarios

I generate the seed scenarios as permutations of $$k$$ unique colors and $$mn-k$$ void elements (without repetition of the voids). Hence, the total number is $$(nm)!
/ (mn-k)!$$ For example, for a $$1 \times 4$$ grid and colors $${0, 1}$$ with void denoted as $$\square$$ the seed scenarios are:

• $$[0 1 \square \square]$$
• $$[0 \square 1 \square]$$
• $$[0 \square \square 1]$$
• $$[1 0 \square \square]$$
• $$[1 \square 0 \square]$$
• $$[1 \square \square 0]$$
• $$[\square 0 1 \square]$$
• $$[\square 0 \square 1]$$
• $$[\square 1 0 \square]$$
• $$[\square 1 \square 0]$$
• $$[\square \square 0 1]$$
• $$[\square \square 1 0]$$

## Seed growth / multicolor flood-fill

I assume the painting to be performed in a fixed ordering of the colors. The seed scenario always comes with the first color set as the current one. New solutions are then generated either by switching to the next color or by painting empty cells with the current color.

    // PSEUDOCODE
    buffer.push(seed_scenario with current_color := 0);
    while (buffer not empty) {
        partial_solution := buffer.pop();
        if (partial_solution.available_cells.count == 0)
            result.add(partial_solution);
        else {
            buffer.push(partial_solution.nextColor()); // copy solution and increment color
            buffer.pushAll(partial_solution.expand()); // kind-of flood-fill, produces new solutions
        }
    }

partial_solution.expand() generates a number of new partial solutions. All of them have one additional cell colored by the current color. It examines the current region boundary and tries to paint each neighboring cell with the current color, if the cell is still void.

partial_solution.nextColor() duplicates the current partial solution but increments the current painting color.

This simple seed growth enumerates all possible solutions for the seed setup. However, a pair of different seed scenarios can produce identical solutions. There are indeed many duplicates produced. So far, I do not know how to take care of that. So I had to add the third step that filters duplicates so that the result contains only distinct solutions.
## Question

I assume there should be a way to get rid of the duplicates, since that is where the efficiency suffers the most. Is it possible to merge the seed generation with the painting stage? I started to think about some sort of dynamic programming, but I have no clear idea yet. In 1D it would be much easier, but the 4-connectivity in a 2D grid makes the problem much harder. I tried searching for solutions or publications, but didn't find anything useful yet. Maybe I am throwing in the wrong keywords. So any suggestions to my approach or pointers to literature are very much appreciated!

# Note

I found Grid Puzzle Split Algorithm, but I'm not sure if the answers can be adapted to my problem.
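For very small grids, the duplicate problem can be sidestepped entirely by brute force: enumerate every assignment of the k labels and keep those where every label is used and forms a 4-connected region. This is only a baseline sketch for validating a smarter enumerator (all names are mine), not a scalable method:

```python
from itertools import product
from collections import deque

def connected(cells):
    """BFS over the 4-neighborhood, restricted to the given cell set."""
    cells = set(cells)
    if not cells:
        return False
    start = next(iter(cells))
    seen, q = {start}, deque([start])
    while q:
        i, j = q.popleft()
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                q.append(nb)
    return seen == cells

def connected_colorings(m, n, k):
    """All labeled k-colorings of an m x n grid whose color classes
    are each nonempty and 4-connected."""
    cells = [(i, j) for i in range(m) for j in range(n)]
    out = []
    for colors in product(range(k), repeat=m * n):
        classes = {}
        for cell, c in zip(cells, colors):
            classes.setdefault(c, []).append(cell)
        if len(classes) == k and all(connected(v) for v in classes.values()):
            out.append(colors)
    return out
```

Dividing the count by $k!$ (when color identity does not matter) converts labeled colorings into partitions; e.g. the 2x2 grid has 12 labeled 2-colorings, i.e. 6 partitions.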
Mar 1, 2000

### Abstract:

The documents concerning the Cuban missile crisis, declassified by the Office of the Historian of the U.S. Department of State, reveal quite effectively a key theme in the conduct of U.S. foreign policy in 1962–63: Cuba's bizarre role within the context of U.S. government decision making. This role had somewhat contradictory dimensions. Cuba seemed to be both an afterthought and an obsession for U.S. decision makers. Its exclusion from the diplomatic negotiations over the missile crisis was an instance of negligence, though it came about in part from a deliberate decision. Such Cuban exclusion reduced the likelihood that the United States could accomplish all of its goals in the missile crisis settlement. Moreover, information included in these documents only to some degree (or not at all) calls attention to Cuba's much greater substantive importance before and during the missile crisis than U.S. officials thought at the time. This documentary record, therefore, reminds us that the outcome of the missile crisis was so positive for the United States, to a significant degree, thanks to Soviet statesmanship in managing and controlling its unhappy Cuban ally.
# Debye–Hückel equation

*Distribution of ions in a solution*

The chemists Peter Debye and Erich Hückel noticed that solutions that contain ionic solutes do not behave ideally even at very low concentrations. So, while the concentration of the solutes is fundamental to the calculation of the dynamics of a solution, they theorized that an extra factor that they termed gamma is necessary to the calculation of the activity coefficients of the solution. Hence they developed the Debye–Hückel equation and Debye–Hückel limiting law. The activity is only proportional to the concentration and is altered by a factor known as the activity coefficient $\gamma$. This factor takes into account the interaction energy of ions in solution.

## Debye–Hückel limiting law

In order to calculate the activity $a_C$ of an ion C in a solution, one must know the concentration and the activity coefficient:

$$a_C = \gamma \frac{[C]}{[C^\ominus]},$$

where

- $\gamma$ is the activity coefficient of C,
- $[C^\ominus]$ is the concentration of the chosen standard state, e.g. 1 mol/kg if molality is used,
- $[C]$ is a measure of the concentration of C. Dividing $[C]$ by $[C^\ominus]$ gives a dimensionless quantity.

The Debye–Hückel limiting law enables one to determine the activity coefficient of an ion in a dilute solution of known ionic strength.
The equation is[1]:section 2.5.2 ${\displaystyle \ln(\gamma _{i})=-{\frac {z_{i}^{2}q^{2}\kappa }{8\pi \varepsilon _{r}\varepsilon _{0}k_{B}T}}=-{\frac {z_{i}^{2}q^{3}N_{\text{A}}^{1/2}}{4\pi (\varepsilon _{r}\varepsilon _{0}k_{B}T)^{3/2}}}{\sqrt {10^{3}{\frac {I}{2}}}}=-Az_{i}^{2}{\sqrt {I}},}$ where ${\displaystyle z_{i}}$  is the charge number of ion species i, ${\displaystyle q}$  is the elementary charge, ${\displaystyle \kappa }$  is the inverse of the Debye screening length (defined below), ${\displaystyle \varepsilon _{r}}$  is the relative permittivity of the solvent, ${\displaystyle \varepsilon _{0}}$  is the permittivity of free space, ${\displaystyle k_{B}}$  is Boltzmann's constant, ${\displaystyle T}$  is the temperature of the solution, ${\displaystyle N_{\mathrm {A} }}$  is Avogadro's number, ${\displaystyle I}$  is the ionic strength of the solution (defined below), ${\displaystyle A}$  is a constant that depends on temperature. If ${\displaystyle I}$  is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for ${\displaystyle A}$  of water is ${\displaystyle 1.172{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}}$  at 25 °C. It is common to use a base-10 logarithm, in which case we factor ${\displaystyle \ln 10}$ , so A is ${\displaystyle 0.509{\text{ mol}}^{-1/2}{\text{kg}}^{1/2}}$ . It is important to note that because the ions in the solution act together, the activity coefficient obtained from this equation is actually a mean activity coefficient. The excess osmotic pressure obtained from Debye–Hückel theory is in cgs units:[1] ${\displaystyle P^{\text{ex}}=-{\frac {k_{B}T\kappa _{\text{cgs}}^{3}}{24\pi }}=-{\frac {k_{B}T\left({\frac {4\pi \sum _{j}c_{j}q_{j}}{\epsilon _{0}\epsilon _{r}k_{B}T}}\right)^{3/2}}{24\pi }}.}$ Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure ${\displaystyle P^{\text{id}}=k_{B}T\sum _{i}c_{i}}$ . 
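The base-10 form of the limiting law is easy to check numerically. A minimal sketch (function names are illustrative; A = 0.509 mol⁻¹ᐟ² kg¹ᐟ² is the 25 °C water value just quoted, with I in mol/kg):

```python
import math

def log10_gamma_limiting(z, ionic_strength, A=0.509):
    """Debye-Hueckel limiting law: log10(gamma) = -A * z**2 * sqrt(I)."""
    return -A * z ** 2 * math.sqrt(ionic_strength)

def gamma_limiting(z, ionic_strength, A=0.509):
    """Mean activity coefficient predicted by the limiting law."""
    return 10 ** log10_gamma_limiting(z, ionic_strength, A)

# A 1:1 electrolyte at I = 0.001 mol/kg deviates only ~4% from ideality
print(round(gamma_limiting(1, 0.001), 3))  # 0.964
```

Note how strongly the charge enters: for a divalent ion the deviation is four times larger at the same ionic strength.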
The osmotic coefficient is then given by ${\displaystyle \phi ={\frac {P^{\text{id}}+P^{\text{ex}}}{P^{\text{id}}}}=1+{\frac {P^{\text{ex}}}{P^{\text{id}}}}.}$ ## Summary of Debye and Hückel's first article on the theory of dilute electrolytes The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of a German-language journal Physikalische Zeitschrift. An English translation[2]:217–63 of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954".[2]:xv The article deals with the calculation of properties of electrolyte solutions that are not under the influence of net electric fields, thus it deals with electrostatics. In the same year they first published this article, Debye and Hückel, hereinafter D&H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here. In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article. 
### Introduction D&H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is[2]:221 ${\displaystyle \prod _{i=1}^{s}x_{i}^{\nu _{i}}=K,}$ where ${\displaystyle \textstyle \prod }$  is a notation for multiplication, ${\displaystyle i}$  is a dummy variable indicating the species, ${\displaystyle s}$  is the number of species participating in the reaction, ${\displaystyle x_{i}}$  is the mole fraction of species ${\displaystyle i}$ , ${\displaystyle \nu _{i}}$  is the stoichiometric coefficient of species ${\displaystyle i}$ , K is the equilibrium constant. D&H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing ${\displaystyle K}$  with ${\displaystyle \gamma K}$ , where ${\displaystyle \gamma }$  is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species)—which is what is used in modern chemistry as of 2007. The relationship between ${\displaystyle \gamma }$  and the special activity coefficients ${\displaystyle \gamma _{i}}$  is[2]:248 ${\displaystyle \log(\gamma )=\sum _{i=1}^{s}\nu _{i}\log(\gamma _{i}).}$ ### Fundamentals D&H use the Helmholtz and Gibbs free entropies ${\displaystyle \Phi }$  and ${\displaystyle \Xi }$  to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms: ${\displaystyle \Phi =S-{\frac {U}{T}}=-{\frac {A}{T}},}$ where ${\displaystyle \Phi }$  is Helmholtz free entropy, ${\displaystyle S}$  is entropy, ${\displaystyle U}$  is internal energy, ${\displaystyle T}$  is temperature, ${\displaystyle A}$  is Helmholtz free energy. 
D&H give the total differential of ${\displaystyle \Phi }$  as[2]:222 ${\displaystyle d\Phi ={\frac {P}{T}}\,dV+{\frac {U}{T^{2}}}\,dT,}$ where ${\displaystyle P}$  is pressure, ${\displaystyle V}$  is volume. By the definition of the total differential, this means that ${\displaystyle {\frac {P}{T}}={\frac {\partial \Phi }{\partial V}},}$ ${\displaystyle {\frac {U}{T^{2}}}={\frac {\partial \Phi }{\partial T}},}$ which are useful further on. As stated previously, the internal energy is divided into two parts:[2]:222 ${\displaystyle U=U_{k}+U_{e}}$ where ${\displaystyle k}$  indicates the classical part, ${\displaystyle e}$  indicates the electric part. Similarly, the Helmholtz free entropy is also divided into two parts: ${\displaystyle \Phi =\Phi _{k}+\Phi _{e}.}$ D&H state, without giving the logic, that[2]:222 ${\displaystyle \Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.}$ It would seem that, without some justification, ${\displaystyle \Phi _{e}=\int {\frac {P_{e}}{T}}\,dV+\int {\frac {U_{e}}{T^{2}}}\,dT.}$ Without mentioning it specifically, D&H later give what might be the required (above) justification while arguing that ${\displaystyle \Phi _{e}=\Xi _{e}}$ , an assumption that the solvent is incompressible. The definition of the Gibbs free entropy ${\displaystyle \Xi }$  is[2]:222–3 ${\displaystyle \Xi =S-{\frac {U+PV}{T}}=\Phi -{\frac {PV}{T}}=-{\frac {G}{T}},}$ where ${\displaystyle G}$  is Gibbs free energy. D&H give the total differential of ${\displaystyle \Xi }$  as[2]:222 ${\displaystyle d\Xi =-{\frac {V}{T}}\,dP+{\frac {U+PV}{T^{2}}}\,dT.}$ At this point D&H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure ${\displaystyle P_{e}}$  amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. 
Therefore, they neglect change in volume of water due to electric pressure, writing[2]:223 ${\displaystyle \Xi =\Xi _{k}+\Xi _{e},}$ and put ${\displaystyle \Xi _{e}=\Phi _{e}=\int {\frac {U_{e}}{T^{2}}}\,dT.}$ D&H say that, according to Planck, the classical part of the Gibbs free entropy is[2]:223 ${\displaystyle \Xi _{k}=\sum _{i=0}^{s}N_{i}(\xi _{i}-k_{B}ln(x_{i})),}$ where ${\displaystyle i}$  is a species, ${\displaystyle s}$  is the number of different particle types in solution, ${\displaystyle N_{i}}$  is the number of particles of species i, ${\displaystyle \xi _{i}}$  is the particle specific Gibbs free entropy of species i, ${\displaystyle k_{B}}$  is Boltzmann's constant, ${\displaystyle x_{i}}$  is the mole fraction of species i. Species zero is the solvent. The definition of ${\displaystyle \xi _{i}}$  is as follows, where lower-case letters indicate the particle specific versions of the corresponding extensive properties:[2]:223 ${\displaystyle \xi _{i}=s_{i}-{\frac {u_{i}+Pv_{i}}{T}}.}$ D&H don't say so, but the functional form for ${\displaystyle \Xi _{k}}$  may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction.[3] D&H note that the internal energy ${\displaystyle U}$  of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&H introduce the concept of an ionic atmosphere or cloud. 
Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa.[2]:225 ### The potential energy of an arbitrary ion solution Electroneutrality of a solution requires that[2]:233 ${\displaystyle \sum _{i=1}^{s}N_{i}z_{i}=0,}$ where ${\displaystyle N_{i}}$  is the total number of ions of species i in the solution, ${\displaystyle z_{i}}$  is the charge number of species i. To bring an ion of species i, initially far away, to a point ${\displaystyle P}$  within the ion cloud requires interaction energy in the amount of ${\displaystyle z_{i}q\varphi }$ , where ${\displaystyle q}$  is the elementary charge, and ${\displaystyle \varphi }$  is the value of the scalar electric potential field at ${\displaystyle P}$ . If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. 
All species' number densities ${\displaystyle n_{i}}$  are altered from their bulk (overall average) values ${\displaystyle n_{i}^{0}}$  by the corresponding Boltzmann factor ${\displaystyle e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}}$ , where ${\displaystyle k_{B}}$  is the Boltzmann constant, and ${\displaystyle T}$  is the temperature.[4] Thus at every point in the cloud[2]:233 ${\displaystyle n_{i}={\frac {N_{i}}{V}}e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}=n_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}.}$ Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions.[2]:227 The charge density is related to the number density:[2]:233 ${\displaystyle \rho =\sum _{i}z_{i}qn_{i}=\sum _{i}z_{i}qn_{i}^{0}e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}.}$ When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results:[2]:233 ${\displaystyle \nabla ^{2}\varphi =-{\frac {\rho }{\varepsilon _{r}\varepsilon _{0}}}=-\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}.}$ This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It was solved by the Swedish mathematician Thomas Hakon Gronwall and his collaborators, the physical chemists V. K. La Mer and Karl Sandved, in a 1928 article from Physikalische Zeitschrift dealing with extensions to Debye–Hückel theory, which resorted to a Taylor series expansion. However, for sufficiently low concentrations of ions, a first-order Taylor series approximation for the exponential function may be used (${\displaystyle e^{x}\approx 1+x}$  for ${\displaystyle 0<x\ll 1}$ ) to create a linear differential equation (Hamann, Hamnett, and Vielstich. Electrochemistry. Wiley-VCH. section 2.4.2). 
D&H say that this approximation holds at large distances between ions,[2]:227 which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution.[2]:227 Thus ${\displaystyle -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}e^{-{\frac {z_{i}q\varphi }{k_{B}T}}}\approx -\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}\left(1-{\frac {z_{i}q\varphi }{k_{B}T}}\right)=-\left(\sum _{i}{\frac {z_{i}qn_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}}}-\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{B}T}}\right).}$ The Poisson–Boltzmann equation is transformed to[2]:233 ${\displaystyle \nabla ^{2}\varphi =\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}\varphi }{\varepsilon _{r}\varepsilon _{0}k_{B}T}},}$ because the first summation is zero due to electroneutrality.[2]:234 Factor out the scalar potential and assign the leftovers, which are constant, to ${\displaystyle \kappa ^{2}}$ . Also, let ${\displaystyle I}$  be the ionic strength of the solution:[2]:234 ${\displaystyle \kappa ^{2}=\sum _{i}{\frac {z_{i}^{2}q^{2}n_{i}^{0}}{\varepsilon _{r}\varepsilon _{0}k_{B}T}}={\frac {2Iq^{2}}{\varepsilon _{r}\varepsilon _{0}k_{B}T}},}$ ${\displaystyle I={\frac {1}{2}}\sum _{i}z_{i}^{2}n_{i}^{0}.}$ So, the fundamental equation is reduced to a form of the Helmholtz equation:[5] ${\displaystyle \nabla ^{2}\varphi =\kappa ^{2}\varphi .}$ Today, ${\displaystyle \kappa ^{-1}}$  is called the Debye screening length. 
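The screening length ${\displaystyle \kappa ^{-1}}$ can be evaluated directly from the definition of ${\displaystyle \kappa ^{2}}$ above. A sketch in SI units (the relative permittivity 78.4 for water at 25 °C and the conversion of molar ionic strength to a number density are the only assumptions here):

```python
import math

Q = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K
NA = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(molar_ionic_strength, eps_r=78.4, T=298.15):
    """kappa^-1 in metres, from kappa^2 = 2 I q^2 / (eps_r eps0 kB T),
    where I is converted from mol/L to a number density in m^-3."""
    I = molar_ionic_strength * 1000 * NA          # ions per m^3
    kappa_sq = 2 * I * Q ** 2 / (eps_r * EPS0 * KB * T)
    return 1 / math.sqrt(kappa_sq)

# 0.01 M 1:1 electrolyte in water at 25 C: kappa^-1 is roughly 3 nm
print(round(debye_length(0.01) * 1e9, 2))  # 3.04
```

The characteristic inverse-square-root scaling is visible here: diluting the solution by a factor of 100 makes the ion atmosphere 10 times thicker.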
D&H recognize the importance of the parameter in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type.[2]:229 The equation may be expressed in spherical coordinates by taking ${\displaystyle r=0}$  at some arbitrary ion:[6][2]:229 ${\displaystyle \nabla ^{2}\varphi ={\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}\left(r^{2}{\frac {\partial \varphi (r)}{\partial r}}\right)={\frac {\partial ^{2}\varphi (r)}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial \varphi (r)}{\partial r}}=\kappa ^{2}\varphi (r).}$ The equation has the following general solution (keep in mind that ${\displaystyle \kappa }$  is a positive constant):[2]:229 ${\displaystyle \varphi (r)=A{\frac {e^{-{\sqrt {\kappa ^{2}}}r}}{r}}+A'{\frac {e^{{\sqrt {\kappa ^{2}}}r}}{2r{\sqrt {\kappa ^{2}}}}}=A{\frac {e^{-\kappa r}}{r}}+A''{\frac {e^{\kappa r}}{r}}=A{\frac {e^{-\kappa r}}{r}},}$ where ${\displaystyle A}$ , ${\displaystyle A'}$ , and ${\displaystyle A''}$  are undetermined constants. The electric potential is zero at infinity by definition, so ${\displaystyle A''}$  must be zero.[2]:229 In the next step, D&H assume that there is a certain radius ${\displaystyle a_{i}}$ , beyond which no ions in the atmosphere may approach the (charge) center of the singled out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. 
Mathematically, they treat the singled out ion as a point charge to which one may not approach within the radius ${\displaystyle a_{i}}$ .[2]:231 The potential of a point charge by itself is ${\displaystyle \varphi _{\text{pc}}(r)={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}.}$ D&H say that the total potential inside the sphere is[2]:232 ${\displaystyle \varphi _{\text{sp}}(r)=\varphi _{\text{pc}}(r)+B_{i}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{r}}+B_{i},}$ where ${\displaystyle B_{i}}$  is a constant that represents the potential added by the ionic atmosphere. No justification for ${\displaystyle B_{i}}$  being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge).[7] Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius ${\displaystyle r}$ , it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius ${\displaystyle a_{i}}$ , the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition). In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius ${\displaystyle a_{i}}$ , there is a continuity of ${\displaystyle \varphi (r)}$  and its first derivative. 
Thus[2]:232 ${\displaystyle \varphi (a_{i})=A_{i}{\frac {e^{-\kappa a_{i}}}{a_{i}}}={\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}}}+B_{i}=\varphi _{\text{sp}}(a_{i}),}$ ${\displaystyle \varphi '(a_{i})=-{\frac {A_{i}e^{-\kappa a_{i}}(1+\kappa a_{i})}{a_{i}^{2}}}=-{\frac {1}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {z_{i}q}{a_{i}^{2}}}=\varphi _{\text{sp}}'(a_{i}),}$ ${\displaystyle A_{i}={\frac {z_{i}q}{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {e^{\kappa a_{i}}}{1+\kappa a_{i}}},}$ ${\displaystyle B_{i}=-{\frac {z_{i}q\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}$ By the definition of electric potential energy, the potential energy associated with the singled out ion in the ion atmosphere is[2]:230 & 232 ${\displaystyle u_{i}=z_{i}qB_{i}=-{\frac {z_{i}^{2}q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}$ Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions. To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy:[2]:230 & 232 ${\displaystyle U_{e}={\frac {1}{2}}\sum _{i=1}^{s}N_{i}u_{i}=-\sum _{i=1}^{s}{\frac {N_{i}z_{i}^{2}}{2}}{\frac {q^{2}\kappa }{4\pi \varepsilon _{r}\varepsilon _{0}}}{\frac {1}{1+\kappa a_{i}}}.}$ ## Nondimensionalization This section was created without reference to the original paper and there are some errors in it (for instance, the ionic strength is off by a factor of two). Once these are rectified, this section should probably be moved to the nondimensionalization article and then be linked from here, since the nondimensional version of the Poisson–Boltzmann equation isn't necessary to understand the D&H theory. 
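The matching conditions above can be verified numerically. This sketch (with illustrative values for κ and a_i) plugs the derived A_i and B_i back into the two potential expressions and checks that they agree at r = a_i:

```python
import math

Q = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potentials_match(z=1, kappa=3.3e8, a=3e-10, eps_r=78.4):
    """Check continuity of phi at r = a_i for the derived A_i and B_i."""
    c = z * Q / (4 * math.pi * eps_r * EPS0)     # point-charge prefactor
    A_i = c * math.exp(kappa * a) / (1 + kappa * a)
    B_i = -c * kappa / (1 + kappa * a)
    phi_outer = A_i * math.exp(-kappa * a) / a   # screened (atmosphere) solution
    phi_inner = c / a + B_i                      # point charge + constant shift
    return math.isclose(phi_outer, phi_inner, rel_tol=1e-9)

print(potentials_match())  # True
```

Algebraically both expressions reduce to ${\displaystyle z_{i}q/(4\pi \varepsilon _{r}\varepsilon _{0}\,a_{i}(1+\kappa a_{i}))}$, which is what the numerical check confirms.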
The differential equation is ready for solution (as stated above, the equation only holds for low concentrations): ${\displaystyle {\frac {\partial ^{2}\varphi (r)}{\partial r^{2}}}+{\frac {2}{r}}{\frac {\partial \varphi (r)}{\partial r}}={\frac {Iq\varphi (r)}{\varepsilon _{r}\varepsilon _{0}k_{b}T}}=\kappa ^{2}\varphi (r).}$ Using the Buckingham π theorem on this problem results in the following dimensionless groups: ${\displaystyle \pi _{1}={\frac {q\varphi (r)}{k_{b}T}}=\Phi (R(r))}$ ${\displaystyle \pi _{2}=\varepsilon _{r}}$ ${\displaystyle \pi _{3}={\frac {ak_{b}T\varepsilon _{0}}{q^{2}}}}$ ${\displaystyle \pi _{4}=a^{3}I}$ ${\displaystyle \pi _{5}=z_{0}}$ ${\displaystyle \pi _{6}={\frac {r}{a}}=R(r).}$ ${\displaystyle \Phi }$  is called the reduced scalar electric potential field. ${\displaystyle R}$  is called the reduced radius. The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, ${\displaystyle (\kappa a)^{2}}$ . The second could be called the reduced central ion charge, ${\displaystyle Z_{0}}$  (with a capital Z). Note that, though ${\displaystyle z_{0}}$  is already dimensionless, without the substitution given below, the differential equation would still be dimensional. 
${\displaystyle {\frac {\pi _{4}}{\pi _{2}\pi _{3}}}={\frac {a^{2}q^{2}I}{\varepsilon _{r}\varepsilon _{0}k_{b}T}}=(\kappa a)^{2}}$ ${\displaystyle {\frac {\pi _{5}}{\pi _{2}\pi _{3}}}={\frac {z_{0}q^{2}}{4\pi a\varepsilon _{r}\varepsilon _{0}k_{b}T}}=Z_{0}}$ To obtain the nondimensionalized differential equation and initial conditions, use the ${\displaystyle \pi }$  groups to eliminate ${\displaystyle \varphi (r)}$  in favor of ${\displaystyle \Phi (R(r))}$ , then eliminate ${\displaystyle R(r)}$  in favor of ${\displaystyle r}$  while carrying out the chain rule and substituting ${\displaystyle {R^{\prime }}(r)=a}$ , then eliminate ${\displaystyle r}$  in favor of ${\displaystyle R}$  (no chain rule needed), then eliminate ${\displaystyle I}$  in favor of ${\displaystyle (\kappa a)^{2}}$ , then eliminate ${\displaystyle z_{0}}$  in favor of ${\displaystyle Z_{0}}$ . The resulting equations are as follows: ${\displaystyle {\frac {\partial \Phi (R)}{\partial R}}{\bigg |}_{R=1}=-Z_{0}}$ ${\displaystyle \Phi (\infty )=0}$ ${\displaystyle {\frac {\partial ^{2}\Phi (R)}{\partial R^{2}}}+{\frac {2}{R}}{\frac {\partial \Phi (R)}{\partial R}}=(\kappa a)^{2}\Phi (R).}$ For table salt in 0.01 M solution at 25 °C, a typical value of ${\displaystyle (\kappa a)^{2}}$  is 0.0005636, while a typical value of ${\displaystyle Z_{0}}$  is 7.017, highlighting the fact that, in low concentrations, ${\displaystyle (\kappa a)^{2}}$  is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however. ## Experimental verification of the theory To verify the validity of the Debye–Hückel theory, many experimental ways have been tried, measuring the activity coefficients: the problem is that we need to go towards very high dilutions. 
Typical examples are measurements of vapour pressure, freezing point, and osmotic pressure (indirect methods) and measurement of the electric potential in cells (direct method). At high dilutions, good results have been obtained using liquid-membrane cells, which make it possible to investigate aqueous media down to 10⁻⁴ M. For 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation agrees with experiment, but for 2:2 or 3:2 electrolytes negative deviations from the Debye–Hückel limiting law are found: this behavior is observed only in the very dilute region, and in more concentrated regions the deviation becomes positive. It is possible that the Debye–Hückel equation is unable to predict this behavior because of the linearization of the Poisson–Boltzmann equation, but this is not certain: systematic studies began only during the last years of the 20th century, since earlier it was not possible to investigate the 10⁻⁴ M region, so new theories may yet emerge. ## Extensions of the theory Warning: The notation in this section is (currently) different from that in the rest of the article. A number of approaches have been proposed to extend the validity of the law to the concentration ranges commonly encountered in chemistry. One such extended Debye–Hückel equation is given by: ${\displaystyle \ -\log _{10}(\gamma )={\frac {A|z_{+}z_{-}|{\sqrt {I}}}{1+Ba{\sqrt {I}}}}\,}$ where ${\displaystyle \ \gamma \,}$  is the activity coefficient, ${\displaystyle \ z\,}$  is the integer charge of the ion (1 for H+, 2 for Mg2+, etc.), ${\displaystyle \ I\,}$  is the ionic strength of the aqueous solution, and ${\displaystyle \ a\,}$  is the size or effective diameter of the ion in angstroms. The effective hydrated radius of the ion, a, is the radius of the ion together with its closely bound water molecules. 
Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3 Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter of the hydronium ion is 9 Å. ${\displaystyle \ A\,}$  and ${\displaystyle \ B\,}$  are constants with values of 0.5085 and 0.3281, respectively, at 25 °C in water [2]. The extended Debye–Hückel equation provides accurate results for μ ≤ 0.1. For solutions of greater ionic strength, the Pitzer equations should be used. In such solutions the activity coefficient may actually increase with ionic strength. The Debye–Hückel equation cannot be used for solutions of surfactants, where the presence of micelles influences the electrochemical properties of the system (even a rough estimate overestimates γ by ~50%).
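A sketch of the extended equation in Python (illustrative function name; A = 0.5085 and B = 0.3281 are the 25 °C water values quoted above, with a in angstroms):

```python
import math

def log10_gamma_extended(z_plus, z_minus, ionic_strength, a_angstrom,
                         A=0.5085, B=0.3281):
    """Extended Debye-Hueckel:
    -log10(gamma) = A |z+ z-| sqrt(I) / (1 + B a sqrt(I))."""
    sqrt_I = math.sqrt(ionic_strength)
    return -A * abs(z_plus * z_minus) * sqrt_I / (1 + B * a_angstrom * sqrt_I)

# H+ (a = 9 angstrom) in a 1:1 solution at I = 0.1
print(round(10 ** log10_gamma_extended(1, -1, 0.1, 9.0), 3))  # 0.826
```

The denominator term B·a·√I is what tames the limiting law at moderate ionic strength: with a = 0 the expression collapses back to the limiting law, which would predict a noticeably smaller γ at I = 0.1.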
## Electrical characterization of sputter-deposited ZnO coatings on optical fibers Piezoelectric ZnO coatings on optical fibers are of interest for active optical-fiber devices such as phase and wavelength modulators. Reactive magnetron sputtering has been used to prepare high-resistivity [001] radially oriented ZnO coatings on 85 mm long sections of Cr/Au-coated optical fibers. The impedance spectra of 2 and 6 mm long transducers are analyzed between 1 kHz and 100 MHz by applying an electrical potential across the thickness of the ZnO coating. The capacitance of these devices exhibits a logarithmic frequency dispersion and a nearly constant dielectric loss of 0.006 +/- 0.002 between 1 and 100 kHz. Two radial-mode piezoelectric resonances, the first at approximately 22 MHz and the second at 66 MHz, are identified. The thickness distribution of the ZnO coating, which results from the magnetron sputter-deposition process, introduces parabolic dependencies of the capacitance and resonance frequencies on the positions of elements placed along the length of the fiber. This thickness distribution makes possible the identification of the radial-mode resonances and of the effects of ZnO thickness gradients on the piezoelectric resonances. Thickness-induced changes of the piezoelectric resonance frequencies also allow the observation of an 'inversion' of the resonance response for resonances that occur above the LCR resonance. (C) 1997 Elsevier Science S.A. Published in: Sensors and Actuators A: Physical, 63(2), 153-160. Year: 1997. ISSN: 0924-4247. Author affiliation: Fox, G. R., Ecole Polytech Fed Lausanne, Lab Ceram, CH-1015 Lausanne, Switzerland.
# Create 3x3 Matrix Create 3-by-3 matrix from nine input values ## Library Utilities/Math Operations ## Description The Create 3x3 Matrix block creates a 3-by-3 matrix from nine input values, where each input corresponds to an element of the matrix. The output matrix has the form `$A=\left[\begin{array}{ccc}{A}_{11}& {A}_{12}& {A}_{13}\\ {A}_{21}& {A}_{22}& {A}_{23}\\ {A}_{31}& {A}_{32}& {A}_{33}\end{array}\right]$` ## Inputs and Outputs

| Input | Description |
| --- | --- |
| First | Contains the first row and first column of the matrix. |
| Second | Contains the first row and second column of the matrix. |
| Third | Contains the first row and third column of the matrix. |
| Fourth | Contains the second row and first column of the matrix. |
| Fifth | Contains the second row and second column of the matrix. |
| Sixth | Contains the second row and third column of the matrix. |
| Seventh | Contains the third row and first column of the matrix. |
| Eighth | Contains the third row and second column of the matrix. |
| Ninth | Contains the third row and third column of the matrix. |

| Output | Dimension Type | Description |
| --- | --- | --- |
| First | 3-by-3 matrix | Contains the matrix. |
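Outside Simulink, the same row-major packing of nine scalars can be sketched in plain Python (the function name is illustrative, not part of any MathWorks API):

```python
def create_3x3_matrix(a11, a12, a13, a21, a22, a23, a31, a32, a33):
    """Pack nine scalar inputs into a 3-by-3 matrix (list of rows),
    mirroring the input order of the Create 3x3 Matrix block."""
    return [[a11, a12, a13],
            [a21, a22, a23],
            [a31, a32, a33]]

A = create_3x3_matrix(1, 2, 3, 4, 5, 6, 7, 8, 9)
print(A[1][2])  # second row, third column -> 6
```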
# How can one show that an ElGamal-like signature verification scheme is valid? For an ElGamal-like signature scheme, I am given two things: 1. The signing function, 2. the verification function. How can I show that the verification function is valid? Example 1: Signing: $s := x^{-1}(m - k·r) \pmod {p - 1}$ Verification: $g^m = (g^x)^s · r^r \pmod p$ NOTES: $g$ is the generator over the set $\mathbb Z^*_p$. $x$ is the secret value, $g^x$ the public one. $r := g^k \pmod p$ - Your scheme is not the original ElGamal scheme, and I doubt that it is valid at all. First, ElGamal uses a hash of the message where you are using the message itself (which will make the scheme less secure, and only applicable for short messages, but verification should still work). Second, in the exponents of the verification you switched $s$ and $r$. Is this a typo (and you want actually a proof for the original ElGamal), or do you really search a proof for this modified scheme? –  Paŭlo Ebermann Nov 27 '11 at 13:23 Your scheme is not the "true" ElGamal signature scheme: you swapped $x$ and $k$. I assume that $m$ is the hash of the message to sign, not the message itself. Your scheme is sound, which means that the verification algorithm will return "ok" for a signature which has been generated as you suggest. To see that, remember Fermat's Little Theorem which says that for a prime $p$ and every integer $a$ (not multiple of $p$), we have $a^{p-1} = 1 \pmod p$. This means that when dealing with exponents, we can compute them modulo $p-1$. So, when looking at your verification equation: \begin{eqnarray*} (g^x)^s r^r &=& (g^x)^{x^{-1}(m - kr)} (g^k)^r \cr &=& g^{m - kr} g^{kr} \cr &=& g^m \end{eqnarray*} Soundness means that the algorithm works as intended, not that it is secure. 
This specific variant of ElGamal was proposed in 1990 by Agnew, Mullin and Vanstone (the article is called "Improved Digital Signature Scheme based on Discrete Exponentiation"; I could not find a freely downloadable version). It has since been studied in a more general framework, called Meta-ElGamal Signature Schemes. As far as I know, it is still believed secure, but uninspiring (thus understudied), because, like most ElGamal variants, it leads to rather large signatures. DSA is preferred. - This answer should be included in the FAQ on how to write an answer! Bravo.. +1! ;) –  Matteo Jun 20 '12 at 9:45
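The soundness argument can be exercised with a toy implementation. This is only an illustrative sketch: the prime p = 467 and generator g = 2 form a small textbook group, x and k are arbitrary (x must satisfy gcd(x, p − 1) = 1), and m stands for the hash of the message; real deployments need large parameters.

```python
def mod_inverse(a, n):
    """Modular inverse of a mod n via the extended Euclidean algorithm."""
    t, new_t, r, new_r = 0, 1, n, a % n
    while new_r:
        q = r // new_r
        t, new_t = new_t, t - q * new_t
        r, new_r = new_r, r - q * new_r
    assert r == 1, "a is not invertible mod n"
    return t % n

def sign(m, x, k, p, g):
    """r = g^k mod p,  s = x^{-1} (m - k*r) mod (p-1)."""
    r = pow(g, k, p)
    s = (mod_inverse(x, p - 1) * (m - k * r)) % (p - 1)
    return r, s

def verify(m, r, s, y, p, g):
    """Accept iff g^m == y^s * r^r (mod p), where y = g^x is the public key."""
    return pow(g, m, p) == (pow(y, s, p) * pow(r, r, p)) % p

# Toy parameters -- far too small for real security
p, g, x = 467, 2, 127     # gcd(127, 466) = 1, so x is invertible mod p-1
y = pow(g, x, p)          # public key
r, s = sign(100, x, k=213, p=p, g=g)
print(verify(100, r, s, y, p, g))  # True: x*s + k*r == m (mod p-1)
```

The final comment is exactly the soundness equation from the answer above: since s·x ≡ m − k·r (mod p − 1), the exponent in y^s·r^r reduces to m.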
The critical flow ratios for a three-phase signal are found to be $0.30, 0.25$ and $0.25$. The total time lost in the cycle is $10 \: s$. Pedestrian crossings at this junction are not significant. The respective Green times (expressed in seconds and rounded off to the nearest integer) for the three phases are 1. $34, 28$, and $28$ 2. $40, 25$, and $25$ 3. $40, 30$, and $30$ 4. $50, 25$, and $25$
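A worked solution sketch, assuming Webster's optimum-cycle formula C0 = (1.5L + 5)/(1 − Y), with Y the sum of the critical flow ratios and the effective green time C0 − L distributed in proportion to y_i/Y (the standard approach for this kind of problem):

```python
def webster_green_times(flow_ratios, lost_time):
    """Webster's method: optimum cycle C0 = (1.5 L + 5) / (1 - Y),
    effective green G = C0 - L, split in proportion to y_i / Y."""
    Y = sum(flow_ratios)                         # here 0.80
    C0 = (1.5 * lost_time + 5) / (1 - Y)         # (15 + 5) / 0.20 = 100 s
    effective_green = C0 - lost_time             # 90 s
    return [round(y / Y * effective_green) for y in flow_ratios]

print(webster_green_times([0.30, 0.25, 0.25], 10))  # [34, 28, 28] -> option 1
```

So the correct choice is option 1: 34, 28, and 28 seconds.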
# MultiBarrierFunctionMaterial Double well phase transformation barrier free energy contribution. The material provides a function that is parameterized by all phase order parameters eta_i. It calculates a phase transformation barrier energy that contains a double well contribution for each single order parameter. Currently g_order only supports the SIMPLE setting, for which the function is defined as

g(eta) = sum_i eta_i^2 (1 - eta_i)^2    (1)

## Input Parameters

• etas: eta_i order parameters, one for each h. C++ Type: std::vector

### Required Parameters

• compute: When false, MOOSE will not call compute methods on this material. The user must call computeProperties() after retrieving the Material via MaterialPropertyInterface::getMaterial(). Non-computed Materials are not sorted for dependencies. Default: True. C++ Type: bool

• g_order: Polynomial order of the switching function h(eta). Default: SIMPLE. C++ Type: MooseEnum. Options: SIMPLE

• well_only: Make the g zero in [0:1] so it only contributes to enforcing the eta range and not to the phase transformation barrier. Default: False. C++ Type: bool 
- `boundary`: The list of boundary IDs from the mesh where this boundary condition applies. C++ type: `std::vector`.
- `block`: The list of block IDs (SubdomainID) that this object will be applied to. C++ type: `std::vector`.
- `function_name` (default: `g`): Actual name for `g(eta_i)`. C++ type: `std::string`.
- `enable` (default: `true`): Set the enabled status of the MooseObject. C++ type: `bool`.
- `use_displaced_mesh` (default: `false`): Whether or not this object should use the displaced mesh for computation. Note that if this is true but no displacements are provided in the Mesh block, the undisplaced mesh will still be used. C++ type: `bool`.
- `control_tags`: Adds user-defined labels for accessing object parameters via control logic. C++ type: `std::vector`.
- `seed` (default: `0`): The seed for the master random number generator. C++ type: `unsigned int`.
- `implicit` (default: `true`): Determines whether this object is calculated using an implicit or explicit form. C++ type: `bool`.
- `constant_on` (default: `NONE`): When `ELEMENT`, MOOSE will only call `computeQpProperties()` for the 0th quadrature point and then copy that value to the other qps. When `SUBDOMAIN`, MOOSE will only call `computeSubdomainProperties()` for the 0th quadrature point and then copy that value to the other qps; evaluations on element qps will be skipped. C++ type: `MooseEnum`; options: `NONE ELEMENT SUBDOMAIN`.
- `output_properties`: List of material properties, from this material, to output (outputs must also be defined to an output type). C++ type: `std::vector`.
- `outputs` (default: `none`): Vector of output names where you would like to restrict the output of variable(s) associated with this object. C++ type: `std::vector`.
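As a usage sketch, based only on the parameter names documented above (the material block name and the order-parameter variable names are illustrative assumptions, not taken from this page), the object might be configured in an input file like this:

```
[Materials]
  [./barrier]                          # block name is illustrative
    type = MultiBarrierFunctionMaterial
    etas = 'eta1 eta2 eta3'            # one order parameter per phase (assumed names)
    g_order = SIMPLE                   # currently the only supported option
    function_name = g                  # name of the produced material property
    well_only = false
  [../]
[]
```

The `eta1 eta2 eta3` variables would have to be declared elsewhere in the input file as nonlinear variables of the phase-field problem.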
## A bifurcation theory for some nonlinear elliptic equations

### Volume 95 / 2003

Colloquium Mathematicum 95 (2003), 139-151. MSC: 35J20, 35B32. DOI: 10.4064/cm95-1-12

#### Abstract

We deal with the problem $$\begin{cases} -\Delta u = f(x,u) + \lambda g(x,u) & \text{in } \Omega,\\ u|_{\partial\Omega} = 0, \end{cases} \tag*{$(\mathrm{P}_{\lambda})$}$$ where $\Omega\subset \mathbb{R}^n$ is a bounded domain, $\lambda\in \mathbb{R}$, and $f, g\colon \Omega\times \mathbb{R}\to \mathbb{R}$ are two Carathéodory functions with $f(x,0)=g(x,0)=0$. Under suitable assumptions, we prove that there exists $\lambda^{*}>0$ such that, for each $\lambda\in(0,\lambda^{*})$, problem $(\mathrm{P}_{\lambda})$ admits a non-zero, non-negative strong solution $u_{\lambda}\in \bigcap_{p\geq 2}W^{2,p}(\Omega)$ such that $\lim_{\lambda\to 0^+} \|u_{\lambda}\|_{W^{2,p}(\Omega)}=0$ for all $p\geq 2$. Moreover, the function $\lambda\mapsto I_{\lambda}(u_{\lambda})$ is negative and decreasing on $(0,\lambda^{*})$, where $I_{\lambda}$ is the energy functional related to $(\mathrm{P}_{\lambda})$.

#### Authors

- Biagio Ricceri, Department of Mathematics, University of Catania, Viale A. Doria 6, 95125 Catania, Italy
### Session S13 - Harmonic Analysis, Fractal Geometry, and Applications

Friday, July 16, 13:45 ~ 14:15 UTC-3

## Frames generated by the action of a discrete group

### Victoria Paternostro

In this talk we shall discuss the structure of subspaces of a Hilbert space that are invariant under unitary representations of discrete groups. We will study in depth the reproducing properties (that is, being a Riesz basis or a frame) of countable families of orbits. In particular we shall see that every separable Hilbert space $\mathcal{H}$ for which there exists a dual integrable representation $\Pi$ of a discrete group $\Gamma$ on $\mathcal{H}$ admits a Parseval frame of the form $\{\Pi(\gamma)\phi:\,\gamma\in\Gamma,\ \phi\in\Phi\}$, where $\Phi\subseteq \mathcal{H}$ is an at most countable set. Our results extend those that already exist in the Euclidean case to this more general context.
1. JEE Main 2020 (Online) 3rd September Morning Slot (+4, −1)

A uniform thin rope of length 12 m and mass 6 kg hangs vertically from a rigid support and a block of mass 2 kg is attached to its free end. A transverse short wavetrain of wavelength 6 cm is produced at the lower end of the rope. What is the wavelength of the wavetrain (in cm) when it reaches the top of the rope?

A. 12 &nbsp; B. 3 &nbsp; C. 9 &nbsp; D. 6

2. JEE Main 2020 (Online) 2nd September Morning Slot (+4, −1)

An amplitude modulated wave is represented by the expression $v_m = 5(1 + 0.6 \cos 6280t)\sin(211 \times 10^4\, t)$ volts. The minimum and maximum amplitudes of the amplitude modulated wave are, respectively,

A. $${3 \over 2}$$ V, 5 V &nbsp; B. $${5 \over 2}$$ V, 8 V &nbsp; C. 5 V, 8 V &nbsp; D. 3 V, 5 V

3. JEE Main 2020 (Online) 9th January Morning Slot (+4, −1)

Three harmonic waves having equal frequency $$\nu$$ and the same intensity $${I_0}$$ have phase angles 0, $${\pi \over 4}$$ and $$- {\pi \over 4}$$ respectively. When they are superimposed, the intensity of the resultant wave is close to:

A. 5.8 I0 &nbsp; B. 3 I0 &nbsp; C. 0.2 I0 &nbsp; D. I0

4. JEE Main 2020 (Online) 8th January Evening Slot (+4, −1)

A transverse wave travels on a taut steel wire with a velocity of $v$ when the tension in it is $2.06 \times 10^4$ N. When the tension is changed to $T$, the velocity changes to $v/2$. The value of $T$ is close to:

A. $30.5 \times 10^4$ N &nbsp; B. $2.50 \times 10^4$ N &nbsp; C. $10.2 \times 10^2$ N &nbsp; D. $5.15 \times 10^3$ N
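For question 1, the tension in the rope grows from the block's weight at the bottom to the block-plus-rope weight at the top; since $v=\sqrt{T/\mu}$ and the frequency is fixed, $\lambda \propto \sqrt{T}$. A quick sanity check (variable names are illustrative):

```python
import math

# Question 1: transverse wave on a hanging rope with a 2 kg block at the free end
m_rope, m_block = 6.0, 2.0   # kg
lam_bottom = 6.0             # cm, wavelength at the lower end

# Tension is proportional to the supported mass (g cancels in the ratio)
T_bottom = m_block
T_top = m_block + m_rope
lam_top = lam_bottom * math.sqrt(T_top / T_bottom)

print(lam_top)  # 12.0 cm -> option A
```

The tension ratio is 8/2 = 4, so the wavelength doubles from 6 cm to 12 cm.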
# pbi2 molar mass

The formula weight of a compound is simply the sum, in atomic mass units, of the atomic weights of all the atoms in its formula; for bulk stoichiometric calculations it is used as the molar mass in g/mol. To compute the molar mass of a chemical compound, enter its formula into a molar mass calculator and click 'Compute'; such calculators also display the compound name, Hill formula, elemental composition, and mass percent composition, and convert between mass and moles. For lead(II) iodide (PbI2, formerly called plumbous iodide):

molar mass of PbI2 = 207.2 + 2 × 126.90447 = 461.01 g/mol

and for lead(II) nitrate, Pb(NO3)2, the molar mass is 331.21 g/mol. Note that the nitrate ion carries an overall charge. Lead(II) iodide has a few specialized applications, such as the manufacture of solar cells and X-ray and gamma-ray detectors.

Worked examples:

- Mass from moles: 3.01 × 10⁻⁴ mol of PbI2 weighs (3.01 × 10⁻⁴ mol) × 461 g/mol ≈ 0.139 g.
- Molar solubility: a solubility of 0.064 g per 100 mL is 0.64 g/L, so 0.64 g/L ÷ 461.01 g/mol ≈ 1.39 × 10⁻³ mol/L of PbI2. Each mole of dissolved PbI2 releases one mole of Pb²⁺ and two moles of I⁻, which is the starting point for calculating the solubility product; the Ksp for PbI2(s) is quoted here as 1.4 × 10⁻⁸. Similarly, the solubility of CaF2 (molar mass 78.1 g/mol) at 18°C is reported to be 1.6 mg per 100 mL of water, i.e. 0.0016 g / 78.1 g/mol = 2.05 × 10⁻⁵ mol per 100 mL.
- Precipitation yield: for 2 KI + Pb(NO3)2 → PbI2 + 2 KNO3 with excess Pb(NO3)2, 8.69 g of KI (166.0 g/mol) gives (8.69 g / 166.0 g/mol) × (1 mol PbI2 / 2 mol KI) × 461.01 g/mol ≈ 12.1 g of PbI2. A commonly quoted answer of 10.9 g uses 416.0 g/mol for PbI2; that molar mass is incorrect, and with the correct value 12.1 g of PbI2 would be produced.
- Electrolysis: one faraday of electrical charge (F) is one mole of electrons, i.e. 6.02 × 10²³ electrons (the Avogadro number definition applies to moles of electrons as well). For example, depositing 0.587 g of Ni (58.71 g/mol, i.e. 1.00 × 10⁻² mol) requires 2 × 1.00 × 10⁻² = 2.00 × 10⁻² mol of electrons, since each Ni²⁺ ion takes two electrons.
- Atomic masses: an atom of uranium-235 has a mass of 235.043924 amu, so ten million such atoms have a mass of about 2.35043924 × 10⁹ amu. Boron has only two naturally occurring isotopes; the mass of boron-10 is 10.01294 amu.
- Empirical and molecular formula: a compound containing 63.15% carbon, 5.30% hydrogen and 31.55% oxygen has mole ratios 63.15/12.011 : 5.30/1.008 : 31.55/15.999 ≈ 5.26 : 5.26 : 1.97 ≈ 8 : 8 : 3, giving the empirical formula C8H8O3 (152.15 g/mol). Since its molar mass is given as 152.14 g/mol, the molecular formula is also C8H8O3.
- Freezing-point depression: with Kf = +1.86 °C/m for water and a van 't Hoff factor of 2 (unknown 1 is either NaCl or KI), a normalized freezing point of −1.6 °C corresponds to a molality of m = 1.6 / (2 × 1.86) ≈ 0.43 mol solute/kg solvent.

Note on significant figures: in many of these questions the quoted masses and volumes carry only one significant figure, so the answer should be reported with only one significant figure as well.
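The key numbers on this page can be reproduced from standard atomic weights; everything else is stoichiometry:

```python
# Standard atomic weights (g/mol)
Pb, I, K, N, O = 207.2, 126.90447, 39.0983, 14.007, 15.999

M_PbI2 = Pb + 2 * I                 # ~461.01 g/mol
M_KI = K + I                        # ~166.00 g/mol
M_PbNO3_2 = Pb + 2 * (N + 3 * O)    # ~331.21 g/mol

# 2 KI + Pb(NO3)2 -> PbI2 + 2 KNO3, with KI limiting:
# 8.69 g KI -> mol KI -> mol PbI2 (1:2) -> g PbI2
m_PbI2 = 8.69 / M_KI / 2 * M_PbI2   # ~12.07 g (not 10.9 g, which used 416 g/mol)

# Molar solubility from 0.064 g per 100 mL = 0.64 g/L
s = 0.64 / M_PbI2                   # ~1.39e-3 mol/L

print(M_PbI2, m_PbI2, s)
```

Note that the dissolution PbI2 → Pb²⁺ + 2 I⁻ gives Ksp = [Pb²⁺][I⁻]² = 4s³ for a saturated solution with molar solubility s.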
# Implementation

To implement the two discretized equations obtained in the discretization paragraph, we will first program the resolution of the first equation, then of the second one. Finally, we will implement the two equations together and solve the system.

## 1. First equation

Remember that for the first equation we have:

$$$\int_{\Omega} \frac{\Delta t}{\mu} \, (\nabla \times \phi) \cdot (\nabla \times A^n) - \int_{\Gamma_D} \frac{1}{\mu} A_D \cdot (\nabla \times A^n) + \int_{\Omega_C} \sigma \phi \cdot (A^n + \Delta t \nabla V) = \int_{\Omega_C} \sigma \phi \cdot A^{n-1}$$$

To solve the first equation only, we will assume that $V$ is known. By slightly transforming the first equation, we obtain:

$$$\int_{\Omega} \frac{\Delta t}{\mu} \, (\nabla \times \phi) \cdot (\nabla \times A^n) - \int_{\Gamma_D} \frac{1}{\mu} A_D \cdot (\nabla \times A^n) + \int_{\Omega_C} \sigma \phi \cdot A^n = \int_{\Omega_C} \sigma \phi \cdot (A^{n-1} - \Delta t \nabla V)$$$

We will now implement this equation with Feel++. First, we define $\Omega$, the domain, as well as $\Omega_C$, the sub-domain of the conductor.

``````auto mesh = loadMesh(_mesh = new Mesh<Simplex<3>>);
auto cond_mesh = createSubmesh(mesh, markedelements(mesh, "Omega_C"));``````

Remember that $A \in H_{A_D}^{curl}$, but here, to simplify, we will consider that $A \in H^{1}$, so we have to take a suitable function space for $A^n$:

``auto Ah = Pchv<1>(mesh);``

We now define all of our elements:

``````auto A = Ah->element(A0);
auto phi = Ah->element();
auto Aexact = Ah->element();``````

where `Aexact` is the exact solution of a given problem, and `A` is initialized with `A0`. Then we create the first and second members of the weak formulation, the exporter and the time variable.

``````auto l1 = form1(_test = Ah);
auto a1 = form2(_trial = Ah, _test = Ah);
double t = 0;
auto e = exporter(_mesh = mesh);``````

We define the result at time 0, and the $H^1$ and $L^2$ errors.
``````Vexact_g.setParameterValues({{"t", t}});
Vexact = project(_space = Vh, _expr = Vexact_g);
e->save();
double L2Vexact = normL2(_range = elements(mesh), _expr = Vexact_g);
double H1error = 0;
double L2error = 0;``````

We then begin the time loop, where we write the first and second members at time $t$:

``````Aexact_g.setParameterValues({{"t", t}});
Aexact = project(_space = Ah, _expr = Aexact_g);
gO.setParameterValues({{"t", t}});
gI.setParameterValues({{"t", t}});
l1.zero();
a1.zero();
l1 = integrate(_range = elements(cond_mesh), _expr = sigma * inner(id(phi), idv(A) - dt * gradV));
a1 = integrate(_range = elements(mesh), _expr = (dt / mu) * inner(curl(phi), curlt(A)));
a1 += integrate(_range = elements(cond_mesh), _expr = sigma * inner(id(phi), idt(A)));
a1 += on(_range = markedfaces(mesh, "Gamma_I"), _rhs = l1, _element = phi, _expr = gI);
a1 += on(_range = markedfaces(mesh, "Gamma_O"), _rhs = l1, _element = phi, _expr = gO);
a1 += on(_range = markedfaces(mesh, "Gamma_C"), _rhs = l1, _element = phi, _expr = Ad);``````

Finally, we finish by solving the equation at time $t$, exporting the result, and computing the errors.

``````a1.solve(_rhs = l1, _solution = A);
e->save();
L2Aexact = normL2(_range = elements(mesh), _expr = Aexact_g);
L2error = normL2(elements(mesh), (idv(A) - idv(Aexact)));``````

And we start the loop again, which ends when $t \geq tmax$. Complete code is available in the code section.

Now, we will test this program using the function $V = (-xz,0,-\frac{t}{\sigma})$ with $\sigma = 58000$ and $\mu = 1$. Note that the exact solution $A$ for this $V$ is $A = (xzt,0,0)$. We will also take the time $t \in [0,1]$, with a time step $dt=0.025$. We’ll run the simulation on a bar representing the conductor.
This is the config file of the simulation:

``````directory=hifimagnet/mqs/test1

[gmsh]
hsize=0.1
filename=$cfgdir/test1.geo

[functions]
a={0,0,0}
i={x*z*t,0,0}:x:y:z:t
o={x*z*t,0,0}:x:y:z:t
d={x*z*t,0,0}:x:y:z:t
v={-x*z,0,-t/58000}:x:y:z:t
m=1
s=58000
e={x*z*t,0,0}:x:y:z:t

[ts]
time-step = 0.025
time-final = 1``````

Below are the errors we get at different times, with the associated graph in log scale.

$t=0.1$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.000198386 | 5.29439e-05 | 1.40537e-05 |
| $L^2$ relative error | 0.000532326 | 0.000142063 | 3.77101e-05 |
| $H^1$ error | 0.00318232 | 0.00178333 | 0.000950294 |

$t=0.5$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.000992004 | 0.000264656 | 7.02804e-05 |
| $L^2$ relative error | 0.000532365 | 0.000142029 | 3.77164e-05 |
| $H^1$ error | 0.0159153 | 0.00891686 | 0.00476399 |

$t=0.9$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.00178574 | 0.000476291 | 0.000126614 |
| $L^2$ relative error | 0.000532405 | 0.000142003 | 3.77491e-05 |
| $H^1$ error | 0.0286545 | 0.0160523 | 0.00861033 |

Here is a comparison under ParaView between the exact solution and the calculated solution, at different times:

We conclude that at every time, the $L^2$ error slope is close to 2, which is what we expect to have, and the $H^1$ error slope is also close to 1. The difference can be explained by the fact that we took only 3 different hsize values; the results could be better with lower hsize (but the running time can become very long).

## 2. Second equation

Our second equation is:

$$$\int_{\Omega_C} \sigma (A^n + \Delta t\nabla V) \cdot \nabla \psi = \int_{\Omega_C} \sigma A^{n-1} \cdot \nabla \psi$$$

To solve the second equation only, we will assume that $\frac{\partial A}{\partial t}$ is known.
By slightly transforming this equation, we obtain:

$$$\int_{\Omega_C} \sigma \nabla V \cdot \nabla \psi = - \int_{\Omega_C} \frac{\partial A}{\partial t} \cdot \nabla \psi$$$

The code is almost the same as before, with a few modifications. First, we change our function space:

``auto Vh = Pch<1>( cond_mesh );``

and we define our elements:

``````auto V = Vh->element(V0);
auto psi = Vh->element();
auto Vexact = Vh->element();``````

Then we define our two forms:

``````auto a2 = form2( _trial=Vh, _test=Vh);
auto l2 = form1( _test=Vh );
auto e = exporter( _mesh=mesh );``````

The last change is inside the loop, where we define the second equation:

``````l2 = integrate(_range=elements(cond_mesh),_expr = sigma * inner( -dA, trans(grad(psi)) ));
a2 = integrate(_range=elements(cond_mesh),_expr = sigma * inner(gradt(V), grad(psi) ));
a2 += on(_range=markedfaces(cond_mesh,"Gamma_I"), _rhs=l2, _element=psi, _expr= gI );
a2 += on(_range=markedfaces(cond_mesh,"Gamma_O"), _rhs=l2, _element=psi, _expr= gO );
a2 += on(_range=markedfaces(cond_mesh,"Gamma_C"), _rhs=l2, _element=psi, _expr= Ad );``````

Then we solve, and we compute the errors. Complete code is available in the code section.

Now we will test this program with two different sets of functions.

The first test uses the function $A = (-t,0,0)$, so $\frac{\partial A}{\partial t} = (-1,0,0)$. Note that the exact solution is $V = zt$. We will run the simulation on the same geometry as before, with the same time and time step. These are the errors we get with hsize = 0.1:

| t | 0.1 | 0.5 | 0.9 |
| --- | --- | --- | --- |
| $L^2$ error | 1.40624e-15 | 3.01653e-15 | 7.75481e-15 |
| $L^2$ relative error | 2.17853e-15 | 9.34637e-16 | 1.33486e-15 |
| $H^1$ error | 4.47471e-15 | 2.05823e-14 | 3.85823e-14 |

The error is zero up to machine epsilon, which is expected because the function is linear in space.
This is the config file for this simulation:

``````directory=hifimagnet/mqs/test2

[gmsh]
hsize=0.1
filename=$cfgdir/test2.geo

[functions]
v=0
a={-t,0,0}:x:y:z:t
i=z*t:x:y:z:t
o=z*t:x:y:z:t
d=z*t:x:y:z:t
s=58000
e=z*t:x:y:z:t

[ts]
time-step = 0.025
time-final = 1``````

Now we will use the function $A = (-xt,0,zt)$, so $\frac{\partial A}{\partial t} = (-x,0,z)$. Note that the exact solution is $V = zxt$. We will run the simulation on the same geometry as before, with the same time and time step. Below are the errors we get at different times, with the associated graph in log scale.

$t=0.1$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.000318572 | 8.66208e-05 | 2.23948e-05 |
| $L^2$ relative error | 0.00085482 | 0.000232428 | 6.00916e-05 |
| $H^1$ error | 0.00537588 | 0.00303521 | 0.00161933 |

$t=0.5$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.00159286 | 0.000433104 | 0.000111974 |
| $L^2$ relative error | 0.00085482 | 0.000232428 | 6.00916e-05 |
| $H^1$ error | 0.0268794 | 0.0151761 | 0.00809665 |

$t=0.9$:

| h | 0.2 | 0.1 | 0.05 |
| --- | --- | --- | --- |
| $L^2$ error | 0.00286715 | 0.000779587 | 0.000201553 |
| $L^2$ relative error | 0.00085482 | 0.000232428 | 6.00916e-05 |
| $H^1$ error | 0.0483829 | 0.0273169 | 0.014574 |

Here is a comparison under ParaView between the exact solution and the calculated solution, at different times:

This is the config file of the simulation:

``````directory=hifimagnet/mqs/test22

[gmsh]
hsize=0.1
filename=$cfgdir/test2.geo

[functions]
v=0
a={-x*t,0,z*t}:x:y:z:t
i=x*z*t:x:y:z:t
o=x*z*t:x:y:z:t
d=x*z*t:x:y:z:t
s=58000
e=x*z*t:x:y:z:t

[ts]
time-step = 0.025
time-final = 1``````

We can conclude that the $L^2$ and $H^1$ errors are as expected, for the same reasons as for the first equation.

## 3. Coupled system

Now we take back our system:

$$$\int_{\Omega} \frac{\Delta t}{\mu} \, (\nabla \times \phi) \cdot (\nabla \times A^n) - \int_{\Gamma_D} \frac{1}{\mu} A_D \cdot (\nabla \times A^n) + \int_{\Omega_C} \sigma \phi \cdot (A^n + \Delta t \nabla V) = \int_{\Omega_C} \sigma \phi \cdot A^{n-1}$$$

$$$\int_{\Omega_C} \sigma (A^n + \Delta t\nabla V) \cdot \nabla \psi = \int_{\Omega_C} \sigma A^{n-1} \cdot \nabla \psi$$$

which can be rewritten:

$$$\int_{\Omega} \frac{\Delta t}{\mu} \, (\nabla \times \phi) \cdot (\nabla \times A^n) + \int_{\Omega_C} \sigma \phi \cdot A^n - \int_{\Gamma_D} \frac{1}{\mu} A_D \cdot (\nabla \times A^n) + \int_{\Omega_C} \sigma \phi \cdot \Delta t \nabla V = \int_{\Omega_C} \sigma \phi \cdot A^{n-1}$$$

$$$\int_{\Omega_C} \sigma A^n \cdot \nabla \psi + \int_{\Omega_C} \sigma \Delta t\nabla V \cdot \nabla \psi = \int_{\Omega_C} \sigma A^{n-1} \cdot \nabla \psi$$$

To implement it with Feel++, we have to use blockform and a product space, because $(A,V) \in H^1(\Omega) \times H^1(\Omega_C)$. So first, after we define the mesh in the same way as before, we create our elements:

``````auto Ah = Pchv<1>( mesh );
auto Vh = Pch<1>( cond_mesh );

auto A = Ah->element(A0);
auto V = Vh->element(V0);
auto Aexact = Ah->element();
auto Vexact = Vh->element();
auto phi = Ah->element();
auto psi = Vh->element();``````

Then we define the product space, and create our element on this product space:

``````auto Zh = product(Ah,Vh);
auto U = Zh.element();``````

We have to create the blockforms for the right and left sides of our system:

``````auto rhs = blockform1( Zh );
auto lhs = blockform2( Zh );``````

Then we create the exporter, and export the solution for $A$ and $V$ at time $t=0$.
``````
double t = 0;
auto e = exporter( _mesh=mesh );
Aexact_g.setParameterValues({{"t", t}});
Aexact = project(_space = Ah, _expr = Aexact_g);
Vexact_g.setParameterValues({{"t", t}});
Vexact = project(_space = Vh, _expr = Vexact_g);
e->step(t)->add("A", A0);
e->step(t)->add("Aexact", Aexact);
e->step(t)->add("V", V0);
e->step(t)->add("Vexact", Vexact);
e->save();
double L2Aexact = normL2(_range = elements(mesh), _expr = Aexact_g);
double H1Aerror = 0;
double L2Aerror = 0;
double L2Vexact = normL2(_range = elements(mesh), _expr = Vexact_g);
double H1Verror = 0;
double L2Verror = 0;
``````

Now we begin the time loop, in which we first set our variables to the current time:

``````
Aexact_g.setParameterValues({{"t", t}});
Aexact = project(_space = Ah, _expr = Aexact_g);
Vexact_g.setParameterValues({{"t", t}});
Vexact = project(_space = Vh, _expr = Vexact_g);
v0.setParameterValues({{"t", t}});
v1.setParameterValues({{"t", t}});
Ad.setParameterValues({{"t", t}});
``````

And we can write both equations inside the block forms:

``````
lhs.zero();
rhs.zero();
// Ampere law: sigma dA/dt + rot(1/(mu_r*mu_0) rotA) + sigma grad(V) = Js
lhs(0_c, 0_c) = integrate( _range=elements(mesh), _expr = dt * inner(curl(phi) , curlt(A)) );
lhs(0_c, 0_c) += integrate( _range=elements(cond_mesh), _expr = mur * mu0 * sigma * inner(id(phi) , idt(A) ));
lhs(0_c, 1_c) = integrate( _range=elements(cond_mesh), _expr = dt * mu0 * mur * sigma*inner(id(phi),trans(gradt(V))) );
rhs(0_c) = integrate( _range=elements(cond_mesh), _expr = mu0 * mur * sigma * inner(id(phi) , idv(A)));
// Current conservation: div( -sigma grad(V) - sigma*dA/dt ) = Qs
lhs(1_c, 0_c) = integrate( _range=elements(cond_mesh), _expr = sigma * inner(idt(A), trans(grad(psi))) );
lhs(1_c, 1_c) = integrate( _range=elements(cond_mesh), _expr = sigma * dt * inner(gradt(V), grad(psi)) );
rhs(1_c) = integrate( _range=elements(cond_mesh), _expr = sigma * inner(idv(A), trans(grad(psi))) );
``````

The last step before solving is to set the boundary
conditions. This is how we did it:

``````
lhs(0_c, 0_c) += on(_range=markedfaces(mesh,"V0"), _rhs=rhs(0_c), _element=phi, _expr= Ad);
lhs(0_c, 0_c) += on(_range=markedfaces(mesh,"V1"), _rhs=rhs(0_c), _element=phi, _expr= Ad);
lhs(0_c, 0_c) += on(_range=markedfaces(mesh,"Gamma_C"), _rhs=rhs(0_c), _element=phi, _expr= Ad);
lhs(1_c, 1_c) += on(_range=markedfaces(cond_mesh,"V0"), _rhs=rhs(1_c), _element=psi, _expr= v0);
lhs(1_c, 1_c) += on(_range=markedfaces(cond_mesh,"V1"), _rhs=rhs(1_c), _element=psi, _expr= v1);
lhs(1_c, 1_c) += on(_range=markedfaces(cond_mesh,"Gamma_C"), _rhs=rhs(1_c), _element=psi, _expr= Vexact_g);
``````

And then we solve and export:

``````
lhs.solve(_rhs=rhs, _solution=U);
e->step(t)->add( "A", U(0_c));
e->step(t)->add( "V", U(1_c));
e->step(t)->add( "Aexact", Aexact);
e->step(t)->add( "Vexact", Vexact);
e->save();
``````

However, this is probably not the right way to do it: when we solve (and export) the system, we do not get good results. For example, at time $t=dt$ the solution we obtain is not close to the exact one, and the error is large. The visualization suggests that our boundary conditions are not applied properly: the boundary values should coincide with the exact solution, since we define them from the exact solution, but they differ. So the way we set the boundary conditions in the code seems to be incorrect, and as a consequence we have not yet been able to obtain correct results for a full solve. The complete code is available in the code section.

Config file used for the simulation:

``````
directory=hifimagnet/mqs/mqs-blockform/conductor/const
A0={0,0,0}:x:y:z
V0=z:x:y:z
v0=z:x:y:z:t
v1=z:x:y:z:t
Ad={0,0,-t}:x:y:z:t
mu_mag=1
sigma=1
Aexact={0,0,-t}:x:y:z:t
Vexact=z:x:y:z:t

[gmsh]
hsize=0.1
filename=$cfgdir/conductor.geo

[ts]
time-step=0.025
time-final=1
``````
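Independently of Feel++, the `blockform1`/`blockform2` construction above ultimately assembles one monolithic matrix from the $2\times 2$ grid of bilinear blocks and solves the resulting linear system. As a library-agnostic sketch of that idea, here is a minimal pure-Python example; the block values below are made up for illustration and are not the actual MQS matrices:

```python
# Assemble a monolithic matrix from a 2x2 grid of dense blocks, mimicking
# lhs(0_c,0_c), lhs(0_c,1_c), lhs(1_c,0_c), lhs(1_c,1_c), then solve it.

def assemble(blocks):
    """Stack a 2D grid of dense blocks (lists of lists) into one matrix."""
    rows = []
    for brow in blocks:
        for i in range(len(brow[0])):
            row = []
            for block in brow:
                row.extend(block[i])
            rows.append(row)
    return rows

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical blocks: A-A coupling, A-V coupling, V-A coupling, V-V coupling
A00 = [[4.0, 1.0], [1.0, 3.0]]
A01 = [[1.0], [0.0]]
A10 = [[1.0, 0.0]]
A11 = [[2.0]]
K = assemble([[A00, A01], [A10, A11]])
rhs = [6.0, 4.0, 3.0]          # chosen so the solution is (1, 1, 1)
u = solve(K, rhs)
residual = max(abs(sum(K[i][j] * u[j] for j in range(3)) - rhs[i]) for i in range(3))
```

In Feel++ the blocks are sparse and the solve is delegated to PETSc, but the saddle-point structure of the coupled $(A, V)$ system is the same.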
# Please Help

What is the solution for $2y''+3y'+5y=7x^2+9x+11$? Thanks.

Note by Ce Die, 1 year, 8 months ago
## Comments

Sort by: Top Newest

$2y'' + 3y' + 5y = 7x^2 + 9x + 11$ The homogeneous equation is: $2y'' + 3y' + 5y = 0$ The general solution to the homogeneous equation has the form: $y = c \, e^{\alpha x}$ Plugging this back into the H equation yields a complex conjugate solution pair: $2 \alpha^2 \, y + 3 \alpha y + 5y = 0 \\ 2 \alpha^2 + 3 \alpha + 5 = 0 \\ \alpha_1 = \frac{-3 - \sqrt{31} i}{4} \\ \alpha_2 = \frac{-3 + \sqrt{31} i}{4}$ The solution to the H equation is: $y_H = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x}$ Now for the particular solution associated with the polynomial, it is evident that this must be a second order polynomial: $y_P = A x^2 + B x + C$ Plugging into the differential equation: $2(2A) + 3(2A x + B) + 5(A x^2 + B x + C) = 7x^2 + 9x + 11 \\ 5A x^2 + (6A + 5B)x + 4A + 3B + 5C = 7x^2 + 9x + 11 \\ 5A = 7 \implies A = \frac{7}{5} \\ 6A + 5B = 9 \implies B = \frac{3}{25} \\ 4A + 3B + 5C = 11 \implies C = \frac{126}{125}$ The particular solution is: $y_P = \frac{7}{5} x^2 + \frac{3}{25} x + \frac{126}{125}$ The total solution is: $y = y_H + y_P = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x} + \frac{7}{5} x^2 + \frac{3}{25} x + \frac{126}{125} \\ \alpha_1 = \frac{-3 - \sqrt{31} i}{4} \\ \alpha_2 = \frac{-3 + \sqrt{31} i}{4}$ To solve for the constants $c_1$ and $c_2$, solve initial-value equations for the function and its derivative. End of first part. Suppose instead that the differential equation had been: $2y'' + 3y' + 5y = 7x^3 + 11x^2 + 13x + 9$ The homogeneous solution is identical to the previous problem.
The particular solution has the form: $y_P = A x^3 + B x^2 + Cx + D$ Plugging into the differential equation: $2(6 A x + 2B) + 3(3 A x^2 + 2 B x + C) + 5(A x^3 + B x^2 + Cx + D) = 7x^3 + 11x^2 + 13x + 9 \\ 5A = 7 \implies A = \frac{7}{5} \\ 9A + 5B = 11 \implies B = -\frac{8}{25} \\ 12A + 6B + 5C = 13 \implies C = -\frac{47}{125} \\ 4B + 3C + 5D = 9 \implies D = \frac{1426}{625}$ The total solution is therefore: $y = y_H + y_P = c_1 e^{\alpha_1 x} + c_2 e^{\alpha_2 x} + \frac{7}{5} x^3 -\frac{8}{25} x^2 -\frac{47}{125} x + \frac{1426}{625} \\ \alpha_1 = \frac{-3 - \sqrt{31} i}{4} \\ \alpha_2 = \frac{-3 + \sqrt{31} i}{4}$ - 1 year, 8 months ago

Thank you, and also how do you solve $2y''+3y'+5y=7x^3+11x^2+13x+9$? Thanks - 1 year, 8 months ago

The process is the same as for the other problem. The only difference is that the particular solution is a third order polynomial instead of a second order polynomial. - 1 year, 8 months ago

Please give me a solution for this - 1 year, 8 months ago

Ok, I will post it a bit later - 1 year, 8 months ago

Thank you, I hope to see it - 1 year, 8 months ago

I have added it - 1 year, 8 months ago

Thank you so much, you enlighten me! :) - 1 year, 8 months ago

You're welcome - 1 year, 8 months ago

How about $2y''+3y'+5y=7x^4$, how do you solve this? Thanks - 1 year, 7 months ago

Since the general procedure is the same as for the other two, I'll leave this one to you - 1 year, 7 months ago

Is this correct if 4th degree is $y'' = 8Ax^2+6Bx+2C$, $y'=4Ax^3+3Bx^2+2Cx+D$ and $y = Ax^4+Bx^3+Cx^2+Dx+E$? I hope it's right. Thanks - 1 year, 7 months ago

Correct, except that the first term in your $y''$ equation should have a coefficient of $12$ instead of $8$ - 1 year, 7 months ago

$y''=12Ax^2+6Bx+2C$ is this correct?
- 1 year, 7 months ago

Yes, it is - 1 year, 7 months ago

For $x^5$, the $y_{P}$ gives $y''=15Ax^3+12Bx^2+6Cx+2D$? Is this correct, Sir? - 1 year, 7 months ago

Look again at the coefficient on your $x^3$ term - 1 year, 7 months ago

I think it's 18? - 1 year, 7 months ago
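The undetermined-coefficients arithmetic in the thread above is easy to machine-check: apply the operator $2y''+3y'+5y$ to the proposed particular solution and compare coefficients with the right-hand side. A quick sketch using exact rational arithmetic (the helper name `apply_ode` is mine):

```python
from fractions import Fraction as F

def apply_ode(coeffs):
    """coeffs[k] is the coefficient of x^k in y; return the coefficients of 2y'' + 3y' + 5y."""
    out = [F(0)] * len(coeffs)
    for k, a in enumerate(coeffs):
        out[k] += 5 * a                        # 5y
        if k >= 1:
            out[k - 1] += 3 * k * a            # 3y' : d/dx(a x^k) = k a x^(k-1)
        if k >= 2:
            out[k - 2] += 2 * k * (k - 1) * a  # 2y'': k(k-1) a x^(k-2)
    return out

# Quadratic case: y_P = 7/5 x^2 + 3/25 x + 126/125, listed from x^0 up
quad = [F(126, 125), F(3, 25), F(7, 5)]
assert apply_ode(quad) == [11, 9, 7]           # matches 7x^2 + 9x + 11

# Cubic case: y_P = 7/5 x^3 - 8/25 x^2 - 47/125 x + 1426/625
cubic = [F(1426, 625), F(-47, 125), F(-8, 25), F(7, 5)]
assert apply_ode(cubic) == [9, 13, 11, 7]      # matches 7x^3 + 11x^2 + 13x + 9
```

Both coefficient sets from the thread check out exactly.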
# How do you simplify 1/(x^3*x^-7)?

Mar 1, 2016

${x}^{4}$.

#### Explanation:

This can be done in either one or two steps, using the rule for multiplying powers: when powers of the same base $x$ are multiplied, the exponents are added. So $\frac{1}{x^{3+(-7)}} = \frac{1}{x^{-4}}$. The answer can either be left like this or be simplified further by remembering that a power in the denominator can be moved to the numerator by reversing the sign of its exponent. So $\frac{1}{x^{-4}} = x^{-(-4)} = x^{4}$. Hope it helps! :D
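A quick numerical sanity check of the simplification (any nonzero $x$ works):

```python
# 1/(x^3 * x^-7) should equal x^4 for every nonzero x
for x in (2.0, 3.5, -1.25):
    lhs_val = 1 / (x**3 * x**-7)
    assert abs(lhs_val - x**4) < 1e-9 * abs(x**4)
```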
# ICHEP 2020 28 July 2020 to 6 August 2020 virtual conference Europe/Prague timezone ## Discriminating new physics scenarios in $b\rightarrow s\,\mu^+\,\mu^-$ via transverse polarization asymmetry of $K^*$ in $B\rightarrow K^*\,\mu^+\,\mu^-$ decay 29 Jul 2020, 13:39 3m virtual conference #### virtual conference Poster 05. Quark and Lepton Flavour Physics ### Speaker Suman Kumbhakar (IIT Bombay) ### Description A global fit to current $b\rightarrow s\,l^+\,l^-$ data suggest several new physics solutions. Considering only one operator at a time and new physics in the muon sector, it has been shown that the new physics scenarios (I) $C_9^{\rm NP}<0$, (II) $C_{9}^{\rm NP} = -C_{10}^{\rm NP}$, (III) $C_9^{\rm NP} = -C_9^{\rm 'NP}$ can account for all data in this sector. In order to discriminate between these scenarios one needs to have a handle on additional observables in $b \to s \, \mu^+ \, \mu^-$ sector. In this work we study transverse polarization asymmetry of $K^*$ polarization in $B\rightarrow K^*\,\mu^+\,\mu^-$ decay, $A_T$, to explore such a possibility. We show that $A_T$ is a good discriminant of all the three scenarios. The measurement of this asymmetry with a percent accuracy can confirm which new physics scenario is the true solution, at better than 3$\sigma$ C.L. ### Primary authors Suman Kumbhakar (IIT Bombay) Sankagiri Umasankar (IIT Bombay) Ashutosh Alok (IIT Jodhpur)
# Low-energy couplings of QCD from current correlators near the chiral limit

Giusti L, Hernandez P, Laine M, Weisz P, Wittig H (2004) JOURNAL OF HIGH ENERGY PHYSICS 2004(04): 013. No fulltext has been uploaded. References only!

Journal Article | Original Article | Published | English

Abstract: We investigate a new numerical procedure to compute fermionic correlation functions at very small quark masses. Large statistical fluctuations, due to the presence of local "bumps" in the wave functions associated with the low-lying eigenmodes of the Dirac operator, are reduced by an exact low-mode averaging. To demonstrate the feasibility of the technique, we compute the two-point correlator of the left-handed vector current with Neuberger fermions in the quenched approximation, for lattices with a linear extent of L ≈ 1.5 fm, a lattice spacing a ≈ 0.09 fm, and quark masses down to the epsilon-regime. By matching the results with the corresponding (quenched) chiral perturbation theory expressions, an estimate of (quenched) low-energy constants can be obtained. We find agreement between the quenched values of F extrapolated from the p-regime and extracted in the epsilon-regime.

### Cite this

Giusti L, Hernandez P, Laine M, Weisz P, Wittig H. Low-energy couplings of QCD from current correlators near the chiral limit. JOURNAL OF HIGH ENERGY PHYSICS. 2004;2004(04):013.

Giusti, L., Hernandez, P., Laine, M., Weisz, P., & Wittig, H. (2004). Low-energy couplings of QCD from current correlators near the chiral limit. JOURNAL OF HIGH ENERGY PHYSICS, 2004(04), 013. doi:10.1088/1126-6708/2004/04/013

Giusti, L., Hernandez, P., Laine, M., Weisz, P., and Wittig, H. (2004). Low-energy couplings of QCD from current correlators near the chiral limit. JOURNAL OF HIGH ENERGY PHYSICS 2004, 013.

Giusti, L., et al., 2004. Low-energy couplings of QCD from current correlators near the chiral limit.
JOURNAL OF HIGH ENERGY PHYSICS, 2004(04), p. 013.

L. Giusti, et al., "Low-energy couplings of QCD from current correlators near the chiral limit", JOURNAL OF HIGH ENERGY PHYSICS, vol. 2004, 2004, p. 013.

Giusti, L., Hernandez, P., Laine, M., Weisz, P., Wittig, H.: Low-energy couplings of QCD from current correlators near the chiral limit. JOURNAL OF HIGH ENERGY PHYSICS. 2004, 013 (2004).

Giusti, L, Hernandez, P, Laine, Mikko, Weisz, P, and Wittig, H. "Low-energy couplings of QCD from current correlators near the chiral limit". JOURNAL OF HIGH ENERGY PHYSICS 2004.04 (2004): 013.

### Sources

arXiv: hep-lat/0402002
Inspire: 643837
Acta Mathematica Scientia, Series A ›› 2021, Vol. 41 ›› Issue (5): 1270-1282.

### The Two-Dimensional Steady Chaplygin Gas Flows Passing a Straight Wedge

Jia Jia

1. School of Mathematical Sciences, East China Normal University, Shanghai 200241

• Received: 2020-07-29 Online: 2021-10-26 Published: 2021-10-08

Abstract: The purpose of this paper is to investigate two-dimensional steady supersonic Chaplygin gas flow past a straight wedge. Using the definition of a Radon measure solution, exact expressions are obtained for all cases where the Mach number is greater than 1. The situation is quite different from that of a polytropic gas: for Chaplygin gas flow past a wedge, there exists a Mach number $M^{\ast}_{0}$ such that when the Mach number of the incoming flow is greater than or equal to $M^{\ast}_{0}$, the mass concentrates on the surface of the straight wedge. In that case there is no piecewise smooth solution in the Lebesgue sense. A limit analysis is used to prove that the limit obtained by Lebesgue integration is consistent with the solution obtained in the sense of Radon measure solutions.

CLC Number:

• O175.2
# Tag Info

3

In typical metallic wiring, current flows in the form of negatively charged electrons. In semiconductors (such as transistors in electronics or solar panels), current flows as either negative electrons or positive holes, depending on the doping type. In plasmas (such as an arc from an electric lighter or fusion plasma in a power plant), current flows as both ...

1

One metaphor to illustrate this is the following: Imagine a company $C$ which has two departments: a production department (let's call its production rate $I$) and a shipping/delivery department (let's call its delivery speed $V$). The average daily rate isn't affected by the delivery speed; only the instantaneous delivery rate at a given instant in time is affected. ...

1

It probably would not make sense to do so, since electric current is not simply the motion of a bunch of charge. Instead, electric current is the net motion of electrically charged particles and the movement of electric field disturbances connected to the charges. The objects that are really contributing to electric current are called charge carriers. ...

1

Forget about the details of circuit analysis for a moment and think of what the wire and plug physically are: a strip or rope of metal. Perhaps it is in several joined pieces (e.g. the "wire" part versus the "prong" part), and wrapped up with plastic insulation, but still. Hence, when you plug the wire into the power strip, it is no ...
# The Unapologetic Mathematician ## Polynomials Now we come to a really nice example of a semigroup ring. Start with the free commutative monoid on $n$ generators. This is just the product of $n$ copies of the natural numbers: $\mathbb{N}^n$. Now let’s build the semigroup ring $\mathbb{Z}[\mathbb{N}^n]$ on this monoid. First off, an element of the monoid is an ordered $n$-tuple of natural numbers $(e_1,e_2,...,e_n)$. Let’s write it in the following, more suggestive notation: $x_1^{e_1}x_2^{e_2}...x_n^{e_n}$. We multiply such “monomials” just by adding up the corresponding exponents, as we know from the composition rule for the monoid. Now we build the semigroup ring by taking formal linear combinations of these monomials. A generic element looks like $\sum\limits_{(e_1,e_2,...,e_n)\in\mathbb{N}^n}c_{(e_1,e_2,...,e_n)}x_1^{e_1}x_2^{e_2}...x_n^{e_n}$ where the $c_{(e_1,e_2,...,e_n)}$ are integers, and all but finitely many of them are zero. Assuming everyone’s taken high school algebra, we’ve seen these before. They’re just polynomials in $n$ variables with integer coefficients! The addition and multiplication rules are just what we know from high school algebra. The only difference is here we specifically don’t think of $x_i$ as a “placeholder” for a number, but as an actual element of our ring. But we can still use it as a placeholder. Let’s consider any other commutative ring $R$ with unit and pick $n$ elements of $R$. Call them $r_1$, $r_2$, and so on up to $r_n$. Since $R$ is a commutative monoid under multiplication there is a unique homomorphism of monoids from $\mathbb{N}^n$ to $R$ sending $x_i$ to $r_i$. That’s just what it means for $\mathbb{N}^n$ to be a free commutative monoid. Now there’s a unique homomorphism of rings from $\mathbb{Z}[\mathbb{N}^n]$ to $R$ sending $x_i$ to $r_i$, because $\mathbb{Z}[\mathbb{N}^n]$ is the semigroup ring of $\mathbb{N}^n$. 
The upshot is that $\mathbb{Z}[\mathbb{N}^n]$ is the free commutative ring with unit on $n$ generators. Because of this, we’ll usually omit the intermediate step of constructing $\mathbb{N}^n$ and just write this ring as $\mathbb{Z}[x_1,x_2,...,x_n]$. There are similar constructions to this one that I’ll leave you to ponder on your own. What if we just constructed the free monoid on $n$ generators (not commutative)? What about the free semigroup? What sort of rings do we get, and what universal properties do they satisfy? April 16, 2007 Posted by | Ring theory | 6 Comments
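The semigroup-ring description translates directly into code: an element of $\mathbb{Z}[x_1,...,x_n]$ is a finitely supported map from exponent tuples in $\mathbb{N}^n$ to integer coefficients, and monomials multiply by adding exponents. A small illustrative sketch (the representation and function names are my own, not from the post):

```python
from collections import defaultdict

def poly_add(p, q):
    """Add two elements of Z[N^n], stored as {exponent-tuple: coefficient}."""
    out = defaultdict(int)
    for mono, c in list(p.items()) + list(q.items()):
        out[mono] += c
    return {m: c for m, c in out.items() if c != 0}  # finitely supported

def poly_mul(p, q):
    """Multiply by convolving: monomials multiply by adding exponent tuples."""
    out = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            mono = tuple(a + b for a, b in zip(m1, m2))  # composition in N^n
            out[mono] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

# In Z[x, y]: (x + y) * (x - y) = x^2 - y^2
x_plus_y = {(1, 0): 1, (0, 1): 1}
x_minus_y = {(1, 0): 1, (0, 1): -1}
product = poly_mul(x_plus_y, x_minus_y)
```

The cancellation of the mixed $xy$ terms in `product` happens exactly because the formal linear combinations are added coefficient-wise over the monoid.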
Ratio class - 7 (Proportion - 1)

30 lessons, 5h 19m

## Bharat Gupta

Engineer by degree, mathematician by heart. 22 years of teaching experience; more than 20,000 students selected.

Comments:

- Unacademy user: How does $(12 - \sqrt{32})$ become $(12 - 4\sqrt{2})$?
  - Himanshu kolte (5 months ago): Right, how can that be? [translated from Hindi]
  - Astha gupta (3 months ago): First of all, it's $(12-\sqrt{32})$, and by taking out a common factor it becomes $4(3-\sqrt{2})$.
  - Snehal Mishra (7 days ago): Rewrite 32 as its prime factors: $\sqrt{2\times2\times2\times2\times2}$. Group the same factors into pairs: $\sqrt{(2\times2)\times(2\times2)\times2}$. Rewrite in exponent form: $\sqrt{2^2\times2^2\times2}$. Since $\sqrt{2^2}=2$, this is $(2\times2)\sqrt{2}=4\sqrt{2}$.
- I can't understand this chapter's video lecture; I am very dissatisfied.
- $B/C=(2/4)^3 = 8/64 = 1/8$; according to me the answer is $B^3:C^3$. Please clarify, sir. Thanks, sir.
- Sir, how did you treat the progression and the proportion as the same thing in the 7, 21, 45 question (video time 5:45)? [translated from Hindi]

Slides:

1. RATIO AND PROPORTION — Class 7 (Proportion part 1)
2. Proportion: if two ratios are equal, it is known as a proportion.
3. Calculating the missing term of a proportion — fourth proportional: find the fourth proportional of 2, 5, 3 ($2 : 5 :: 3 : x$).
4. First proportional: find the first proportional of 3, 6, 10.
5. Third proportional: find the third proportional of 2 and 5.
6. Mean proportional: find the mean proportional of 3 and 12 ($3 : x :: x : 12$).
7. The first, second and fourth terms of a proportion are 7, 21 and 45 respectively. Find the third term.
8. If $x : y = 3 : 1$, then $x^3 - y^3 : x^3 + y^3$ will be equal to which of the following? a. 13:14 b. 14:13 c. 10:11 d. 11:10
## nuhcolee: Simplify (5x^2 + 3x + 4) + (2x^2 − 6x + 3)

Options: $7x^2 + 9x + 7$, $7x^2 + 3x + 7$, $7x^2 − 3x + 1$, $7x^2 − 3x + 7$ (2 years ago)

1. Champs: Factorize?
2. jim_thompson5910: $5x^2 + 2x^2 = {?}$
3. eliassaab: $\left(2 x^2-6 x+3\right)+\left(5 x^2+3 x+4\right)=7 x^2-3 x+7$
4. dpflan: I imagine that your goal now is to make it into a binomial?
5. dpflan: like $(a+b)^2$
6. eliassaab: Just add them up. That is all you have to do.
7. nuhcolee: thanks!
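The addition can also be verified mechanically: two polynomials of degree at most 2 that agree at three distinct points are identical, so checking three values suffices.

```python
def lhs_poly(x):
    # the sum as given in the problem
    return (5*x**2 + 3*x + 4) + (2*x**2 - 6*x + 3)

def rhs_poly(x):
    # the claimed answer: 7x^2 - 3x + 7
    return 7*x**2 - 3*x + 7

# degree <= 2, so agreement at three distinct points proves equality
assert all(lhs_poly(x) == rhs_poly(x) for x in (0, 1, -2))
```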
# Adelic methods for classical modular forms Many conjectures about properties of automorphic forms on $\mathrm{GL}(2)$ can be formulated in the basic language of classical modular forms (i.e. Hecke forms that are holomorphic on $\mathbb{H}$ or nonholomorphic Maass forms). However, sometimes the proofs of these conjectures are best understood (or indeed only understood!) by first translating the problem to the adelic setting of automorphic representations of $\mathrm{GL}_2\left(\mathbb{A}_{\mathbb{Q}}\right)$. An example is a recent paper of Templier disproving a folklore conjecture that $\|f\|_{\infty} \ll_{\varepsilon} N^{\varepsilon}$ for $f \in S_k(N,\chi)$, by (essentially) showing that the local newvector $W_p$ in the Whittaker model of the automorphic representation associated to a newform $f \in S_k^{*}(N,\chi)$ takes large values when $p^2 | N$. My question is: what are other examples of adelic methods being used to solve problems in the classical theory of modular forms? • Not quite adélic but representation-theoretic are some of the papers of Bernstein and Reznikov (e.g., "Analytic continuation of representations and estimates of automorphic forms", Annals of Math. 150 (1999), 329-352). In fact, representation-theoretic insights and adélic insights are often conflated because both were introduced around the same time, but sometimes the first is enough. – Denis Chaperon de Lauzières Dec 2 '13 at 18:18 A standard example of a theory that can be understood without the adelic setting but are better understood with it is the Atkin-Lehner theory of new forms. Atkin and Lehner did prove it without any reference to the adelic settings, using instead heavily the $q$-expansions of modular forms. But the theory is cumbersome to state and to prove in the standard setting, while it becomes much simpler and more natural in the adelic point of view, as was emphasized by Jacquet-Langlands. • Yes, and one should mention W. 
Casselman's contribution to the modernization of Atkin-Lehner theory for $GL(2)$... – paul garrett Dec 3 '13 at 3:00

This answer is partly inspired by Joël's, but I think it's worth mentioning that the method of newforms is much coarser than the theory of types. Modular Hecke cusp forms of fixed weight but different levels often share the "same" L-function. The L-function depends significantly on the irreducible cuspidal automorphic representation, and not so much on the precise vector chosen. If you are willing to use the theory of types and the corresponding congruence representation-valued forms, you are able to see where redundant things are happening. E.g. given a supercuspidal representation of $SL_2(F_p)$, you can inflate it to $\tau_p$ of $SL_2(Z_p)$ and $\tau$ of $SL_2(Z)$. If you study the corresponding $\tau$ vector-valued forms, the local factors of the corresponding supercuspidal representations are precisely the supercuspidal representation given by inducing $\tau_p$ up to $SL_2(Q_p)$. These also turn up as newvectors somewhere, but there they can't be separated from the other phenomena anymore. Note that the corresponding $L$-factor is constant.
# This whole Al Gore electricity bill deal

My view is that it was irresponsible of the media (especially news channels and newspapers) to cling to this topic so quickly without getting the facts straight first. Even if this whole thing is sorted out, the damage is done (especially for those people who don't believe in, or are on the fence about, global warming)... I know it's too much to ask for the media to get all its facts straight before spreading an opinion, but it shouldn't be. The large bills could be due to the fact that he is very active politically. He might have more guests over or more gatherings at his house than the average person. But even if the large bills aren't justified, the media should have acted more responsibly, considering that something like this is very likely to stop the ball that Al Gore got rolling... the positive effects of his actions far outweigh his electricity bill. "Sometimes when people don't like the message, in this case that global warming is real, it's convenient to attack the messenger." Many people have trouble separating messages from messengers, and news channels should be aware of that when they report such news (I'm not going for censorship here, I'm going for tact; for not turning everything into a scandal). Here's an article, if you haven't heard of this yet: http://timesnews.typepad.com/news/2007/02/inconvenient_bu.html [Broken]

Last edited by a moderator:

http://www.dailymail.co.uk/pages/li...le_id=439339&in_page_id=1772&in_author_id=244

The BBC thinks climate change is the biggest threat to mankind. Not a week passes without scary new 'revelations' about the harm being done by carbon emissions, and the inevitable admonitions that we should turn off our lights, not leave our televisions on 'stand-by' and limit our use of cars and aeroplanes. Four weeks ago, the publication of a new report by the International Panel on Climate Change was greeted by the BBC with even more hysteria than usual.
The report's terrifying warnings of the effects of global warming were accorded the status of holy writ. Unless mankind quickly changes its ways, we were informed, we are many of us doomed. Alas, the BBC evinces very few signs of reforming itself. New figures obtained under the Freedom of Information Act show that the Corporation spent a stupendous amount on air travel in the year to April 1, 2006. There were 41,355 journeys by air (equating to almost two flights per employee), collectively notching up 125 million miles, giving an average of 3,000 miles per journey. ....cont'd I'm not quiet familiar with the average spending in KW of a house in the USA. But I guess it's ridicules, there could be a thousand reason why it's high, at least give the a man a chance to explain.. verty Homework Helper He's a politician; politicians don't have to do what they say. chemisttree Homework Helper Gold Member He's a politician; politicians don't have to do what they say. Apparently, Al Gore buys his carbon offsets from... himself! He set up an investment company to invest in stocks (green of course) like solar, wind and hydrogen. These aren't even carbon offsets per se! How does he keep track of all of the carbon dioxide he is saving with that NONSENSE! http://www.ecotality.com/blog/?p=350 Last edited by a moderator: I am not saying that it is right for them to do this -- provided this is not explainable. I just woke up so I haven't read the whole article yet, but I can see how journalists would need a lot of air travel etc. . if it turns out that they are actually over-spending, even considering their "special situations," they should by all means be held accountable. my point is that the media should wait until we know all the facts, and that if the facts do turn out against them, to make sure that they handle the situation responsibly... because something like this could be a gift from god to oil-companies or anti-envioronmentalists, and we could end up back where we started. 
Hey, the guy is fair game, and if he's burning $30,000 a year in energy bills, shame on him. That said, before I ditched my 4300 sq ft home built in 2002, I was spending close to $5K in energy even with low-E windows that cost me a fortune, an attic fan, etc., and not EVEN running the AC in the summer in spite of 100+ temps.

turbo Gold Member: My wife and I own a small log house and we probably spend less than $1200/yr (and are trying to reduce that!) for electricity. Of course, I am not a former vice president, and we do not reside in a 20-room mansion, we have no domestic staff, we do not have an extensive security system with perimeter lighting, we do not have frequent guests to cook for, or to house, nor do we have a huge underground room with grow-lamps... not that Al does. These people are living the life of luxury, as do most millionaires. If you honestly expect a wealthy politician, no matter how well-intentioned, to live life like a prole, you are badly misguided.

Maybe it is misguided of me to think any politician would actually put their money where their mouth is, but he could ditch the mansion, build partly underground using autoclaved concrete, outfit it with solar heating at the least, set up a small wind turbine if practical, and cut the bill by 1/3 to 1/2, I bet.

turbo Gold Member: You're absolutely right, and he should be able to cut it by much more than that if he wants to set a good example. He could ditch the mansion and live in a totally self-sufficient home, if he wanted. He has enough money to absorb the up-front costs of building a self-sufficient home. Not many politicians lead by example, though, especially when it might impact their lifestyle.
How many times do politicians jet all over the country when phone conferencing could have done the job for far less cost and far less environmental impact? Take Bush's trip to the south. Few people can begin to fathom the amount of energy a trip like that consumes. Fly Bush and his entourage down on Air Force One, have Marine One pre-positioned for regional mobility, pre-position Secret Service details for security before and during the visits, and cart everybody's butts around in heavy armored SUVs. It's ridiculous the amounts of taxpayer money and precious energy that are expended for these PR junkets, with no benefit to the taxpayers. Edited to add: I'm certain that if Gore were the president, he would be making similar trips. Air Force One is a huge perk and all the presidents seem to love to use it as often as they can. Last edited: chemisttree Homework Helper Gold Member Not many politicians lead by example, though, especially when it might impact on their lifestyle. How many times do politicians jet all over the country when phone conferencing could have done the job for far less cost and far less environmental impact? Why pick on George? He sounds like he is walking the walk that Al Gore only talks! Read for yourself... and vote republican! http://www.ecorazzi.com/?p=1601 turbo Gold Member Why pick on George? He sounds like he is walking the walk that Al Gore only talks! Read for yourself... and vote republican! http://www.ecorazzi.com/?p=1601 Did you actually read my post, in which I said that EVERY president does Air Force One junkets? I will not vote for Republican nor Democratic candidates based on their party affiliation - only based on which candidate I feel will best represent my interests. There was a time when Maine was served by Bill Cohen (R) and George Mitchell (D) in the Senate and I voted for each of them every time they ran. They both seemed to be able to reach across the aisle - something that is sadly lacking these days.
# Generate-changelog NumberFormatException

Hi guys, I'm trying to use the generate-changelog command on my schema, but I'm getting this SEVERE error message:

SEVERE [liquibase.integration] java.lang.NumberFormatException: Character ( is neither a decimal digit number, decimal point, nor "e" notation exponential mark. liquibase.exception.CommandExecutionException: liquibase.exception.LiquibaseException: Unexpected error running Liquibase: java.lang.NumberFormatException: Character ( is neither a decimal digit number, decimal point, nor "e" notation exponential mark.

What exactly does that mean? Is it trying to parse a value? How can I identify the column triggering that exception? Thanks a lot!!!

Check for invalid characters in the default value of columns of type REAL, e.g. (0)::Real. That was my case.
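The failure mode can be sketched in a few lines: a PostgreSQL column default like `(0)::real` is a cast expression, not a plain number, so a strict numeric parser rejects it at the first `(`. This is an illustrative sketch of the symptom (not Liquibase's actual parser), useful for scanning suspect defaults before running generate-changelog:

```python
from decimal import Decimal, InvalidOperation

def is_plain_number(s: str) -> bool:
    """True if the string parses as a plain decimal/scientific number."""
    try:
        Decimal(s)
        return True
    except InvalidOperation:
        return False

# Default values as they might appear in a PostgreSQL column snapshot.
# "(0)::real" is the kind of expression that trips a strict number parser.
defaults = ["0", "3.14", "1e-3", "(0)::real", "nextval('seq'::regclass)"]
problematic = [d for d in defaults if not is_plain_number(d)]
```

Any entry in `problematic` is a default value worth rewriting (e.g. changing `(0)::real` to a plain `0`) before regenerating the changelog.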
Factors of $$9$$ are $$1$$, $$3$$ and $$9$$. Therefore it is a composite number. Factors of $$14$$ are $$1$$, $$2$$, $$7$$, and $$14$$. Therefore it is a composite number. Seven consecutive composite numbers less than $$100$$: $$89$$ and $$97$$ are prime numbers, and $$90$$, $$91$$, $$92$$, $$93$$, $$94$$, $$95$$, $$96$$ lie between $$89$$ and $$97$$. All of these numbers have more than $$2$$ factors; therefore, they are consecutive composite numbers.
• $$1$$ is neither a prime nor a composite number.
• Any whole number greater than $$1$$ is either a prime or a composite number.
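The search described above can be checked programmatically. A small sketch that finds the first run of seven consecutive composites below 100:

```python
def is_composite(n: int) -> bool:
    """Composite numbers have more than two factors."""
    return n > 3 and any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))

def first_composite_run(length: int, limit: int):
    """Return the first run of `length` consecutive composites below `limit`."""
    run = []
    for n in range(2, limit):
        if is_composite(n):
            run.append(n)
            if len(run) == length:
                return run
        else:
            run = []  # a prime breaks the run
    return None

seven = first_composite_run(7, 100)  # the numbers between primes 89 and 97
```

The first (and only) such run below 100 is 90 through 96, matching the gap between the primes 89 and 97.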
# Math Help - Linear equation + graphing

self solved

2. ## Re: Linear equation + graphing

Originally Posted by jamesazz On this revision question there are two 'options' given. By the way, the question concerns a coat checking business, which explains the manner of the question. Option 1: $3 per item and a $5 arrival fee. Option 2: $5 per item and a $1 arrival fee. Write these as two linear equations in terms of A (amount) and I (items). Then I need to graph these on an axis. And what have you tried?

3. ## Re: Linear equation + graphing

I got it, don't worry.
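Since the thread never shows the worked answer, here is a sketch of the two linear equations (A = 3I + 5 and A = 5I + 1) and where their graphs cross:

```python
def option1(items: int) -> int:
    """A = 3I + 5: $3 per item plus a $5 arrival fee."""
    return 3 * items + 5

def option2(items: int) -> int:
    """A = 5I + 1: $5 per item plus a $1 arrival fee."""
    return 5 * items + 1

# The two lines cross where 3I + 5 = 5I + 1, i.e. at I = 2 items, A = $11;
# for more than 2 items, Option 1 becomes the cheaper choice.
crossover = next(i for i in range(10) if option1(i) == option2(i))
```

Plotting both lines on the same axes, the intersection at (2, 11) is the point where the graphs swap which option lies below the other.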
You can create a .dbtignore file in the root of your dbt project to specify files that should be entirely ignored by dbt. The file behaves like a .gitignore file, using the same syntax. Files and subdirectories matching the pattern will not be read, parsed, or otherwise detected by dbt, as if they didn't exist.

    # .dbtignore

    # ignore individual .py files
    not-a-dbt-model.py
    another-non-dbt-model.py

    # ignore all .py files
    **.py

    # ignore all .py files with "codegen" in the filename
    *codegen*.py
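The matching behaviour of the patterns above can be approximated with Python's `fnmatch`. This is only a rough stand-in: dbt uses gitignore-style matching, which differs in details (e.g. directory-relative patterns and the exact `**` semantics), so treat this as an illustration of which paths the example patterns catch:

```python
from fnmatch import fnmatch

# The patterns from the .dbtignore example above.
ignore_patterns = ["not-a-dbt-model.py", "**.py", "*codegen*.py"]

def is_ignored(path: str) -> bool:
    """Approximate gitignore-style matching with fnmatch (illustrative only)."""
    return any(fnmatch(path, pattern) for pattern in ignore_patterns)
```

With these patterns, any `.py` file anywhere in the project is ignored, while `.sql` models are still parsed.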
Q) # An element X belongs to Group 16 and the $5^{th}$ period. Its atomic number is $\begin{array}{1 1}34\\50\\52\\85\end{array}$
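For periods 4 and 5 (18 elements each), a p-block element's position within its period equals its group number, so the atomic number is simply the count of all elements in earlier periods plus the group number. A sketch of that bookkeeping:

```python
# Lengths of the first five periods of the periodic table.
period_lengths = [2, 8, 8, 18, 18]

def atomic_number(period: int, group: int) -> int:
    """Valid for periods 4-5, where the group number equals the element's
    position within its 18-element period."""
    return sum(period_lengths[: period - 1]) + group

z = atomic_number(5, 16)  # Group 16, 5th period -> tellurium
```

This gives 36 + 16 = 52 (tellurium); the same rule applied to period 4 gives 34 (selenium), another of the listed choices.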
## Introduction

Under the action of an earthquake, damage to a bridge hampers post-disaster rescue and brings difficulties to recovery1,2. Regarding bridge collision, current research focuses on collisions between adjacent beams and between the main girder and the abutment, and some achievements have been made3. Many scholars have set up different models to calculate the collision force and have designed devices to reduce the structural collision force4,5,6,7. However, there are few studies on vertical collision, especially the eccentric collision between the main girder and the bridge pier. Previous far-fault seismic monitoring data show that vertical seismic acceleration is smaller than horizontal acceleration8. Castelli et al.9 analyzed the interaction between soil and structure under vertical earthquakes. Button et al.10 and Wang et al.11 considered vertical seismic design to be less important. With the improvement of monitoring, more and more data show that the near-fault vertical seismic acceleration amplitude is close to, or even far beyond, the horizontal amplitude. The ratio V/H of peak vertical to peak horizontal acceleration in the 1995 Kobe earthquake is close to 2 (ref. 12). Thomas et al.13 conducted a modal analysis of the Tacoma bridge and found that ignoring the vertical seismic component may bring greater risk, especially when the vertical excitation frequency of the ground is close to the natural frequency of the bridge. Borislav et al.14 discussed the influence of long-duration (LD) motion and near-fault (NF) ground motion on the seismic performance of piers through numerical simulation, compared the results with the response to short-duration far-field (FF) motion, and developed a fiber-based nonlinear finite element pier model to evaluate the damage potential of different types of ground motion.
Ayman et al.15 studied the velocity impact effect on bridge piers using near-fault impulse motions, in order to better understand the dynamic behavior under forced vibration. Unlike bridges abroad, most girder bridges in China adopt rubber bearings and lack tensile strength16,17. A near-fault vertical earthquake may therefore separate the main girder from its bearings. To calculate the vertical impact force, Yang et al.4 used the transient wave eigenfunction method. However, Yang's study only considered the effect of separation on the vertical impact and ignored the change in the horizontal displacement of the bridge caused by separation. It is necessary to analyze the influence of possible separation on horizontal bridge damage under strong-amplitude vertical earthquake motion. Previous collision studies mainly focused on the longitudinal collision of adjacent beams18,19,20. Recently, Yang used a continuum model to calculate the vertical impact force between the main girder and the pier. However, that study only considered the vertical earthquake and ignored the influence of the vertical separation of piers and beams on the longitudinal dynamic response of piers. In this study, the influence of pier-beam separation on the bridge force and displacement responses is calculated by establishing a reliable theoretical method for the dynamic bridge response. Using the mode superposition method, the limiting solutions for the vertical impact force between pier and beam and for the longitudinal deformation of the pier top after the first separation are calculated. The influence of pier-beam separation on the pier failure mode under different excitation frequencies is analyzed by calculating the vertical contact force of the pier beam and the pier-top offset.

## Theoretical model and vertical displacement calculation

### Theoretical model

The calculation model selected in this study is a double-span continuous girder bridge.
The calculation model is shown in Fig. 1. The main girder is a prestressed box girder, and both ends are articulated with the abutments. The pier is a double-column round pier, whose bottom is rigidly connected to the foundation. The main girder and pier are connected by laminated rubber bearings. In the vertical and horizontal directions, the hysteresis curve of the bearing is long and narrow, so the damping of the bearing is ignored. To simplify the calculation, the following assumptions are adopted in this paper:

1. When the bridge is forced to resonate, the structural force and displacement responses are always calculated from elastic deformation.
2. Possible bearing shear failure caused by a horizontal earthquake is ignored.
3. The difference in arrival time of the horizontal and vertical seismic waves is ignored; the excitations in the three directions are assumed to act simultaneously.
4. The bridge is assumed to be rigidly connected to the ground, and the coupling effect of soil and foundation is ignored.

### Theoretical solution of displacement response of bridge in longitudinal contact stage

The longitudinal displacement field of the girder can be divided into static displacement, rigid-body displacement and dynamic deformation. $$\begin{array}{*{20}c} \begin{gathered} X_{1} (x,t) = X_{1s} (x) + X_{1g} (x,t) + X_{1d} (x,t) \hfill \\ X_{2} (x,t) = X_{2s} (x) + X_{2g} (x,t) + X_{2d} (x,t) \hfill \\ \end{gathered} \\ {W(\xi ,t) = W_{s} (\xi ) + W_{g} (\xi ,t) + W_{d} (\xi ,t)} \\ \end{array}$$ (1) Here X is the displacement of the girder and W is the displacement of the pier. The subscripts s, g and d denote static displacement, rigid-body displacement and dynamic displacement, respectively.
The static displacement and rigid-body displacement of the bridge are as follows: $$X_{1s} (x) = X_{2s} (x) = W_{s} (\xi ) = 0$$ (2) $$X_{{{\text{1g}}}} (x,t) = X_{{{\text{2g}}}} (x,t) = W_{g} (\xi ,t) = D(t)$$ (3) The dynamic deformation parts of the structure are as follows: \begin{aligned} X_{1d} (x,t) & = \sum\limits_{n = 1}^{\infty } {\varphi_{nb1} (x)q_{n} (t),} \quad X_{{{2}d}} (x,t) = \sum\limits_{n = 1}^{\infty } {\varphi_{{nb{2}}} (x)q_{n} (t)} \\ W_{d} (\xi ,t) & = \sum\limits_{n = 1}^{\infty } {\varphi_{nr} (\xi )q_{n} (t)} \end{aligned} (4) The equation includes the bending wave functions $${\varphi }_{nb1},{\varphi }_{nb2}$$ of the girder, the longitudinal wave function $${\varphi }_{nr}$$ of the pier, and the time function $${q}_{n}(t)$$. By applying the boundary and continuity conditions, the wave functions of the bridge are obtained: $$\begin{array}{*{20}c} {\varphi_{nb1} (x) = A_{n1} \sin k_{bn} x + A_{n1} \tan k_{bn} L\cos k_{bn} x} \\ {\varphi_{nb2} (x) = - A_{n1} \sin k_{bn} x + A_{n1} \tan k_{bn} L\cos k_{bn} x} \\ {\varphi_{nr} (\xi ) = M_{n1} (\sin k_{rn} \xi - \sinh k_{rn} \xi ) + M_{n1} E_{n1} (\cos k_{rn} \xi - \cosh k_{rn} \xi )} \\ \end{array}$$ (5) where $${A}_{n1}$$, $${M}_{n1}$$ and $${E}_{n1}$$ are the wave function coefficients. The time function of the structure can be obtained from modal orthogonality: $$\omega_{n}^{2} q_{n} (t) + 2\zeta \omega_{n} \dot{q}_{n} (t) + \ddot{q}_{n} (t) = \ddot{Q}_{n} (t)$$ (6) By Laplace transformation, $${q}_{n}(t)$$ can be obtained: \begin{aligned} q_{n} (t) & = e^{{ - \zeta \omega_{n} t}} \left( {q_{n} (0)\cos \omega_{d} t + \frac{{\dot{q}_{n} (0) + \zeta \omega_{n} q_{n} (0)}}{{\omega_{d} }}\sin \omega_{d} t} \right) \\ & \quad + \frac{1}{{\omega_{d} }}\int_{{0}}^{t} {e^{{ - \zeta \omega_{n} \tau }} \ddot{Q}_{n} (\tau )\sin (\omega_{d} (t - \tau ))d\tau } \\ \end{aligned} (7) where $$\zeta$$ is the material damping ratio and, in Eq. (7), $${ \omega }_{d}=\sqrt{1-{\zeta }^{2}}{\omega }_{n}$$.
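As a numerical sanity check on the modal solution of Eq. (7), the sketch below evaluates a single damped modal coordinate with the standard Duhamel kernel exp(-ζωₙ(t-τ)) and a trapezoidal rule. The parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def modal_response(omega_n, zeta, Qdd, t, q0=0.0, qdot0=0.0):
    """Damped modal coordinate: free-vibration terms plus Duhamel's
    integral, discretized with the trapezoidal rule on a uniform grid."""
    omega_d = omega_n * np.sqrt(1.0 - zeta ** 2)
    dt = t[1] - t[0]
    free = np.exp(-zeta * omega_n * t) * (
        q0 * np.cos(omega_d * t)
        + (qdot0 + zeta * omega_n * q0) / omega_d * np.sin(omega_d * t)
    )
    q = np.empty_like(t)
    for i, ti in enumerate(t):
        tau = t[: i + 1]
        f = np.exp(-zeta * omega_n * (ti - tau)) * Qdd[: i + 1] * np.sin(omega_d * (ti - tau))
        duhamel = dt * (f.sum() - 0.5 * (f[0] + f[-1])) if i > 0 else 0.0
        q[i] = free[i] + duhamel / omega_d
    return q

# Step forcing Qdd = omega_n**2 should settle at q = 1 once transients decay.
omega_n, zeta = 2.0 * np.pi, 0.05
t = np.linspace(0.0, 10.0, 2001)
q = modal_response(omega_n, zeta, np.full_like(t, omega_n ** 2), t)
```

For a step excitation the steady-state modal coordinate tends to the static value, which gives a quick correctness check on any implementation of the convolution.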
In the separation stage, the girder and pier move at their respective frequencies, the classification of displacement response is consistent with the contact stage. $$\begin{array}{*{20}c} {\overline{X} (x,t) = \overline{X}_{s} (x) + \overline{X}_{g} (x,t) + \overline{X}_{d} (x,t)} \\ {\overline{W} (\xi ,t) = \overline{W}_{s} (\xi ,t) + \overline{W}_{g} (\xi ,t) + \overline{W}_{d} (\xi ,t)} \\ \end{array}$$ (8) The static displacement and rigid body displacement are consistent with the contact stage. The calculation process of wave function in the separation stage is the same as that in the contact stage. During the calculation time, the structure may be separated several times. In that case $${t}^{*}=t-{t}_{2k}$$ is the collision stage, $${t}^{*}=t-{t}_{2k+1}$$ is the separation stage. In the k-th separation process, the dynamic displacement responses of the main girder and pier are as follows: \begin{aligned} q_{nb} (t^{*} ) & = e^{{ - \zeta \omega_{b1} t^{*} }} \left( {q_{1b} (t_{2k + 1}^{ + } )\cos \omega_{b1} t^{*} + \frac{{\dot{q}_{n} (t_{2k + 1}^{ - } ) + \zeta \omega_{b1} q_{nb} (0)}}{{\omega_{b1} }}\sin \omega_{b1} t^{*} } \right) \\ & \quad + \frac{1}{{\omega_{bd} }}\int_{{t_{2k + 1}^{ + } }}^{{t^{*} }} {e^{{ - \zeta \omega_{bn} \tau }} \ddot{Q}_{bn} (\tau )\sin (\omega_{bd} (t^{*} - \tau ))d\tau } \\ q_{nr} (t^{*} ) & = e^{{ - \zeta \omega_{r1} t^{*} }} \left( {q_{1r} (t_{2k + 1}^{ + } )\cos \omega_{b1} t^{*} + \frac{{\dot{q}_{n} (t_{2k + 1}^{ - } ) + \zeta \omega_{r1} q_{nr} (0)}}{{\omega_{r1} }}\sin \omega_{r1} t^{*} } \right) \\ & \quad + \frac{1}{{\omega_{rd} }}\int_{{t_{2k + 1}^{ + } }}^{{t^{*} }} {e^{{ - \zeta \omega_{rn} \tau }} \ddot{Q}_{rn} (\tau )\sin (\omega_{rd} (t^{*} - \tau ))d\tau } \\ \end{aligned} (9) When the vertical relative displacement between the middle of the main beam and the top of the pier is less than zero, the structure contacts again. 
In the collision stage, the dynamic response of the girder and pier is the displacement response of the separation stage superimposed with the collision displacement response. The indirect mode superposition method is adopted to calculate the structural displacement response caused by the collision21; the specific formulas are as follows: $$\begin{gathered} X_{F} (x,t) = \sum\limits_{n = 1}^{\infty } {\overline{\varphi }_{nb} (x)\left\{ \begin{gathered} e^{{ - \zeta \omega_{b1} t^{*} }} q_{nb} (t_{2k} )\cos \omega_{nb} (t - t_{2k} ) \hfill \\ + e^{{ - \zeta \omega_{b1} t^{*} }} \frac{{\dot{q}_{nb} (t_{2k} )}}{{\omega_{nb} }}\sin \omega_{nb} (t - t_{2k} ) \hfill \\ \int_{{t_{2k} }}^{t} {e^{{ - \zeta \omega_{rb} \tau }} \ddot{Q}_{nb} h_{nb} d\tau } \hfill \\ \end{gathered} \right\}} \hfill \\ W_{F} (\xi ,t) = \sum\limits_{n = 1}^{\infty } {\overline{\varphi }_{nr} (\xi )\left\{ \begin{gathered} e^{{ - \zeta \omega_{r1} t^{*} }} q_{nr} (t_{2k} )\cos \omega_{nr} (t - t_{2k} ) \hfill \\ + e^{{ - \zeta \omega_{r1} t^{*} }} \frac{{\dot{q}_{nr} (t_{2k} )}}{{\omega_{nr} }}\sin \omega_{nr} (t - t_{2k} ) \hfill \\ \int_{{t_{2k} }}^{t} {e^{{ - \zeta \omega_{rn} \tau }} \ddot{Q}_{nr} h_{nr} d\tau } \hfill \\ \end{gathered} \right\}} \hfill \\ \end{gathered}$$ (10) where $${Q}_{nb}, {Q}_{nr}$$ are the generalized collision forces, $${x}_{0}$$ and $${\xi }_{0}$$ are the coordinates of the collision points on the main girder and pier, respectively, and $${h}_{nb}$$ and $${h}_{nr}$$ are the impact impulse response functions.

## Numerical simulation and analysis

In this paper, a two-span prestressed continuous girder bridge in China is selected; see Fig. 2 for the specific cross-section. Since the damping of the rubber bearing is very small and its hysteretic curve is long and narrow, the bearing damping is ignored.
The rubber bearing is simulated by two springs, with axial stiffness $${K}_{c}=2.4\times {10}^{9}\,{\text{N/m}}$$ and shear stiffness $${K}_{v}=2.4\times {10}^{6}\,{\text{N/m}}$$. For the flexural vibration of the bridge, the damping ratios of the main girder and pier are assumed to be $${\zeta }_{2}$$ = 2%.

### Influence of structural separation on horizontal displacement of bridges

Since the natural periods of the bridge differ in the vertical and horizontal directions, the seismic excitation periods selected in this paper are close to the horizontal and vertical natural periods. Appropriate time-step increments must be selected to ensure the accuracy of the calculation while avoiding excessive computational cost. The time step must clearly resolve the waves transmitted in the girder and pier, so a small increment is required. The longitudinal wave velocity of the pier is $${c}_{r}=\sqrt{{E}_{r}/{\rho }_{r}}$$ = 3492 m/s, and the bending wave velocity is $${a}_{r}=\sqrt{{E}_{r}{I}_{r}/{\rho }_{r}{A}_{r}}$$ = 1060 m/s. The maximum time-step increment must be less than the time for the bending and axial waves to travel across the entire pier, $$\Delta t<L/{c}_{r}$$ = 4.3e−3 s. Hence, a time-step increment of 1.0e−3 s is used. Another parameter to be considered is the modal cutoff number. Considering the influence of damping on high-frequency vibration, the number of modes selected in this study is N = 5. When separation is considered, the horizontal displacement of the bridge is affected by the vertical response: the recorded separation times are substituted into the horizontal displacement calculation to obtain the horizontal displacement with separation taken into account. The total computation time is 2 s.
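The time-step bound can be reproduced in a few lines. The modulus, density, and pier length below are assumptions chosen to match the reported wave speed (3492 m/s) and bound (4.3e−3 s), not values stated in the paper:

```python
import math

# Assumed parameters (chosen to reproduce the reported c_r and bound).
E_r = 3.0e10     # pier elastic modulus, Pa
rho_r = 2460.0   # pier material density, kg/m^3
L = 15.0         # pier length, m (back-computed from the reported bound)

c_r = math.sqrt(E_r / rho_r)  # longitudinal (axial) wave speed in the pier
dt_max = L / c_r              # time for an axial wave to cross the pier
dt = 1.0e-3                   # chosen increment, safely below the bound
```

Any step below `dt_max` resolves the transit of the axial wave along the pier; the 1.0e−3 s increment used in the paper satisfies this with margin.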
Figure 3 shows the model diagram of the seismic response of the bridge considering the separation condition. The straight lines represent the displacement response in the contact stage, the dots mark the points of state change, and the curves represent the displacement response in the separation stage. In the horizontal direction, when a vertical earthquake separates the main girder and pier, the structure cycles through contact, separation, and recontact; compared with the assumption of permanent contact, separation may therefore affect the horizontal displacement. Figure 4 shows the seismic response of the bridge when T = 0.2 s. Figure 4a,b shows the horizontal displacement of the bridge when separation is ignored: laterally, the maximum relative displacement between the main girder and pier is 29.6 mm; longitudinally, it is 18.1 mm. In the vertical direction, however, the main girder and pier separate when T = 0.2 s. Figure 4c shows that the bridge separates six times in 2 s, and the maximum vertical impact force generated is 29.6 MN, which is 2.47 times the static force. Figure 4d,e shows the horizontal displacement of the bridge with separation considered. Comparison with Fig. 4a,b shows that vertical separation greatly influences the horizontal displacement response. Under the separation condition, the maximum lateral relative displacement increases from 29.6 to 42.77 mm, an increase of 44.5%; the maximum longitudinal relative displacement increases from 18.1 to 25.26 mm, an increase of 39.6%. In the horizontal plane, the maximum displacement is larger laterally than longitudinally, possibly because the main girder and pier are more flexible laterally, while the main girder is stiffer longitudinally.
Therefore, the effect of possible separation on the structure should be considered for near-fault earthquakes.

### Effect of separation on eccentric compression

Figure 5 shows the vertical impact force between pier and beam under different vertical excitation peak accelerations and excitation frequencies. The results show that collision mainly occurs when the excitation frequency is close to the vertical natural frequency of the bridge, and the closer the excitation frequency is to it, the greater the collision force. This study mainly considers the effect of vertical pier-beam separation on pier failure, and the selected seismic excitation period is T = 0.2 s. In the analysis of the above model, the following assumptions are made for convenience of calculation:

1. The time difference between the vertical and horizontal earthquakes is ignored; they are assumed to act simultaneously.
2. The elastic model is always used to calculate the displacement response of the structure, and plastic deformation is ignored.
3. When calculating the bending moment caused by the eccentric impact, the coupling between pier vibration and bearing shear is ignored.
4. The possible second-order effect of the pier is accounted for by substituting the amplification coefficient $$\eta$$ from the design specification into the calculation.

The collision diagram is shown in Fig. 6. The bending moment produced by the vertical collision force can be calculated as $${M}_{c} = F\times \eta \times \Delta$$, where $$\Delta$$ is the lateral impact eccentricity in the lateral direction and the longitudinal impact eccentricity in the longitudinal direction, and $$\eta$$ is the eccentricity amplification factor. Figure 7 shows the bending moment generated by eccentric compression at the pier bottom. Laterally, the bending moment caused by the eccentric impact increases from 1.05 to 1.76 MN m; longitudinally, it increases from 1.13 to 1.85 MN m.
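The moment formula above is a simple product, sketched below with illustrative numbers: the force is the peak impact from Fig. 4c, while the amplification factor and eccentricity are assumed values, not the paper's:

```python
def eccentric_moment(force_n: float, eta: float, delta_m: float) -> float:
    """M_c = F * eta * Delta: bending moment from an eccentric vertical
    impact, with eta the code's second-order amplification coefficient
    and Delta the impact eccentricity (pier-top offset)."""
    return force_n * eta * delta_m

# Illustrative values only: F from Fig. 4c; eta and Delta are assumed here.
F = 29.6e6        # vertical impact force, N
eta = 1.1         # assumed amplification coefficient
delta = 0.025     # assumed eccentricity, m (25 mm pier-top offset)
M_c = eccentric_moment(F, eta, delta)
```

Even a 25 mm offset under a 29.6 MN impact produces a moment on the order of 0.8 MN m, which illustrates why the pier-top offset caused by separation matters for pier bending.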
In the horizontal direction overall, the bending moment caused by the eccentric impact increases from 1.41 to 2.5 MN m. Therefore, the eccentric collision caused by separation may increase the pier bending moment or even cause pier failure. Figure 8 shows the variation of the horizontal bending moment of the pier under three conditions. When the pier-beam separation caused by vertical seismic excitation is ignored, the combined bending moment increases from 1.81 to 2.6 MN m (by 43.7%) as the amplitude ratio $$\lambda$$ (V/H) increases; the increase in vertical force enlarges the eccentric impact moment. When structural separation is considered, the increased deformation of the pier top further increases the eccentricity, and the bending moment increases from 2.61 to 3.4 MN m. Therefore, when considering vertical seismic action, both the vertical force and the possible separation must be taken into account.

### Influence of pier length on structural failure

Figure 9 shows the horizontal deformation of the pier top for different pier heights. Separation increases the pier-top deformation in both the lateral and longitudinal directions, but the trends differ: laterally, the deformation first increases and then decreases, while longitudinally it increases monotonically. The reason is that the lateral natural period of the bridge is governed by the main girder, while the longitudinal natural period is mainly governed by the pier. At the peak of the relative lateral displacement, the natural period of the pier is close to that of the bridge; in the longitudinal direction, the natural period of the pier is close to that of the bridge, and the relative displacement increases with pier height.
### Influence of girder span on structural failure

Figure 10 shows the number of separations for bridges with different main girder spans. The given seismic excitation periods are T = 0.1 s, T = 0.2 s, and T = 0.3 s. Bridge separation occurs only when the seismic excitation period is close to the vertical natural period. With the increase of the main girder span, the separation interval becomes narrower and the number of separations decreases. This is because, under near-fault vertical earthquakes, the V/H amplitude is higher in the short-period range and lower in the long-period range. To analyze the horizontal displacement of the bridge under different main girder spans, the seismic excitation period T = 0.2 s was chosen. The main girder spans of the three bridges are 25 m, 35 m, and 45 m (all three bridges separate when T = 0.2 s). Figure 11 shows the horizontal seismic response of the bridges with different spans. Laterally, as the girder span increases, the influence of separation on the relative displacement between pier and girder decreases gradually; when L = 45 m, this influence is very small. Longitudinally, separation increases the relative displacement between pier and girder regardless of the girder size. The eccentricity determines the bending moment generated by the eccentric impact on the pier. Longitudinally, as the span increases, the eccentricity shows different trends depending on whether the bridge separates, with the values becoming nearly equal in the end. Laterally, the eccentricity increases monotonically with the span. When the pier is low, the eccentric impact mainly comes from the longitudinal direction; as the pier height increases, the lateral contribution grows and can even exceed the longitudinal one.
### Finite element analysis

The preceding sections calculated the pier bending under separation conditions through a theoretical solution that ignores the coupling between the vertical and horizontal responses. To verify the theory, an ANSYS model is used for comparison. The two ends of the main girder are hinged with three-dimensional constraints, and the pier bottom is a rigid node. BEAM188 elements are adopted for the main girder and pier. For the bearings, different elements are used in the vertical and longitudinal directions: vertically, a LINK10 (gap) element whose unstressed length equals the bearing height ∆Z; longitudinally, a COMBIN14 spring element whose stiffness is the shear stiffness of the bearing. The bearing is bonded to the pier and overlapped with the main girder. To simulate the pier-beam separation caused by near-fault vertical excitation: when the vertical relative displacement between the girder midspan and the pier top exceeds the bearing height ∆Z and is increasing, the girder and pier are separated vertically; the longitudinal spring element is then deactivated (killed), and the girder and pier are disconnected. When the relative displacement falls below ∆Z and is decreasing, the girder and pier collide vertically; the longitudinal spring element is reactivated, and the girder and pier are connected again. See Fig. 12 for the specific calculation. The main girder and pier separate when the seismic excitation period is T = 0.25 s and the vertical seismic peak acceleration is 0.6 g. Figure 13 shows the axial force of the pier under the finite element and theoretical solutions.
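The element birth/death switching described above amounts to a small state machine. The sketch below is a plain-Python illustration of that logic (not ANSYS code); the numeric values are made up:

```python
def bearing_state(rel_disp: float, rel_vel: float, dz: float, state: str) -> str:
    """Contact/separation logic for the vertical gap (LINK10-like) element.
    rel_disp: vertical displacement of girder midspan relative to pier top;
    dz: unstressed bearing height. The gap opening past dz while increasing
    triggers separation (kill the longitudinal spring); closing back through
    dz while decreasing triggers recontact (reactivate the spring)."""
    if state == "contact" and rel_disp > dz and rel_vel > 0.0:
        return "separated"
    if state == "separated" and rel_disp < dz and rel_vel < 0.0:
        return "contact"
    return state

s0 = "contact"
s1 = bearing_state(0.012, +0.30, dz=0.010, state=s0)  # gap opening
s2 = bearing_state(0.008, -0.20, dz=0.010, state=s1)  # gap closing, impact
```

Requiring both a displacement threshold and a matching velocity sign prevents the state from chattering when the relative displacement hovers near ∆Z.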
Under the theoretical solution, the maximum axial force on the pier is 16.8 MN, 2.8 times the static value; under the finite element solution, it is 22.13 MN, 3.67 times the static value. The finite element solution is slightly larger than the theoretical one. Because of the large impact force on the pier and the change in longitudinal deformation caused by vertical seismic excitation, the bending moment at the pier bottom increases significantly. When the failure of the longitudinal bearing restraint is considered together with separation, the maximum bending moment at the pier bottom is 7.96 MN m, which exceeds the 5.23 MN m obtained when the restraint failure is ignored, an increase of 47.05% (Fig. 14). With the theoretical solution, ignoring longitudinal separation, the maximum pier-bottom bending moment under the most unfavorable condition is 3.46 MN m; when separation is considered, it is 6.74 MN m. Comparing the theoretical solution with the finite element solution, the following conclusions can be summarized: (1) with the increase of the vertical seismic acceleration amplitude, the bending moment at the pier bottom also increases; (2) when the vertical excitation amplitude is large, ignoring the coupling effect risks underestimating the pier bending; (3) the vertical separation of the structure increases the bending moment at the pier bottom and the risk of flexural failure of the pier; (4) with the increase of the vibration duration, the bending moment at the pier bottom increases gradually, and the influence of separation on the flexural failure of piers is mainly concentrated in the sections with large moment fluctuation.
At this time, the pier amplitude is large; (5) in the finite element calculation, the axial force increases greatly after a small fluctuation, and the bending moment at the bottom of the pier also increases greatly.

## Conclusions

This paper establishes a girder–spring–pier model to analyze the possible changes of main girders and piers under near-fault earthquake conditions. The influence of the eccentric vertical impact on the pier is calculated by analyzing the pier-top deformation caused by separation. By calculating the responses of bridges of different sizes under earthquakes, the following conclusions are drawn:

1. Under a near-fault earthquake, vertical separation of pier and beam may occur when the seismic excitation period is close to the vertical natural period of the bridge. The larger the natural period, the fewer the separations and the smaller the separation interval.
2. With the increase of the V/H amplitude, the increase in bending moment caused by the eccentric impact comes not only from the increase in vertical contact force, but also from the increase in the lateral and longitudinal deformation of the pier top.
3. With the increase of pier height, the excitation acceleration required for pier-beam separation increases. The separation of piers and beams affects the horizontal dynamic response of bridges: laterally, the maximum pier-top deformation first increases and then decreases with increasing pier height; longitudinally, it increases monotonically.
4. With the increase of girder span, the influence of separation on the longitudinal deformation of the pier top is not consistent. Laterally, however, the maximum deformation increases with girder span regardless of separation.
5. When the vertical excitation amplitude is large and the pier height is high, ignoring the horizontal and vertical coupling effect may underestimate the risk of pier bending failure.
# Omron Viper 850 Orientation out of range error

Good morning folks. We have an Omron Viper 850 robot. It is in inverted mount, i.e. mounted upside down, as you can see in the photo. We are using the ACE 4.3 software to program the robot. I don't have the code to paste here, but it is something like:

SPEED 20 ALWAYS
DURATION 0.5 ALWAYS
MOVE safe.pos
MOVES pos2
MOVEC pos3,pos4

The actual code is much bigger than this, but it fits as an example, because we are basically using these five instructions. When we run the code, the robot sometimes completes the course, sometimes not. When it doesn't, we receive the error message "Orientation Out of Range". We suspect that we have to include some instruction telling the robot that it is working in an upside-down orientation. The eV+ programming manual PDF is here. I appreciate any idea to work around this error.

• I'm not experienced with industrial robot arms but I have some experience in designing controllers. Could you get the values of joints 4, 5 and 6 when the error happens? It seems like a spherical wrist singularity; try to give an offset of around 0-5 degrees on joints 5 and 6 and report if there is any difference. Jul 10 at 10:14
• Sorry, I didn't realize what you meant by giving an offset of around 0-5 degrees. Could you give more detailed information? Jul 10 at 10:28
• For example, before the MOVES command, try to increase joints 5 and 6 by 5 degrees. Jul 10 at 10:34
• Right now I am a few kilometers away from the robot, so I can't run the test. I've already moved joints 4, 5 and 6 around before, but with no success. I think I may use the BASE or FRAME instruction to change the orientation. What do you think? Jul 10 at 10:57
• I'm sorry, with my limited knowledge I can't help any further.. but you can take a look at this article robohub.org/… maybe it will give you useful information. Jul 10 at 11:31
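The commenter's wrist-singularity suspicion can be sketched as a simple check: on a 6-axis arm with a spherical wrist, joints 4 and 6 become collinear as joint 5 approaches 0 degrees, and Cartesian interpolated moves through that region can fail with orientation errors. This is a heuristic only, not Omron's internal logic, and the 5-degree tolerance is the commenter's suggestion, not a documented value:

```python
def near_wrist_singularity(j5_deg: float, tol_deg: float = 5.0) -> bool:
    """Heuristic: flag poses where joint 5 is close enough to 0 degrees
    that joints 4 and 6 align (spherical-wrist singularity region)."""
    return abs(j5_deg) < tol_deg
```

Logging joint 5 at the poses where MOVES/MOVEC fail, and checking them against such a threshold, would confirm or rule out the singularity theory before reworking BASE/FRAME settings.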
United States of America: PA High School Core Standards - Algebra I Assessment Anchors

# 3.04 Literal equations and formulas

## Interactive practice questions

Conversion from Fahrenheit to Celsius is defined by the formula $C=\frac{5}{9}\left(F-32\right)$. The temperature inside a freezer is $86^\circ F$. What is this temperature in Celsius?

Sally bought a television series online. The television series was $7.2$ gigabytes large and took $2$ hours to download.

The area of a triangle is defined by the formula $A=\frac{b\times h}{2}$. Find the height $h$ of a triangle with Base $=5$ and Area $=30$.

The volume of a prism is given by the formula $V=a\times b\times h$. Find $h$ if $a=2$, $b=6$ and $V=108$.
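The literal-equation rearrangements behind these questions can be checked directly. A sketch of the three solvable ones (the download-rate question is left out, since its prompt is incomplete above):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """C = (5/9) * (F - 32)."""
    return 5.0 / 9.0 * (f - 32.0)

freezer_c = fahrenheit_to_celsius(86.0)   # the 86 F reading in Celsius
h_triangle = 2 * 30 / 5                   # from A = b*h/2 with A=30, b=5
h_prism = 108 / (2 * 6)                   # from V = a*b*h with V=108, a=2, b=6
```

Each line is the formula solved for the unknown first (h = 2A/b, h = V/(ab)), then evaluated, which is the point of the "literal equations" exercise.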
# How do you divide (5x^3+11x^2+26x+26)div(5x+6) using synthetic division?

Oct 1, 2017

${x}^{2} + x + 4 + \frac{2}{5 x + 6}$

#### Explanation:

To perform synthetic division, first take the divisor, set it equal to 0, and solve to find the number we will use in our synthetic division for the multiplication steps:

$5 x + 6 = 0$

$5 x = - 6$

$x = - \frac{6}{5}$

Write this number at the left edge of your paper, and put a small dividing box next to it to isolate it from the numbers we will write next. Next to that number, spaced out a bit, write down all of the coefficients of the dividend polynomial, starting with the highest-degree term and proceeding all the way down to the constant term. Note: if a term of $x$ is missing, be sure to write a 0 for its coefficient! (This is not the case in this problem, though.)

```
-6/5 |  5   11   26   26
```

Leave some vertical space and draw a horizontal line, much as is done for addition problems:

```
-6/5 |  5   11   26   26
     |
     +------------------
```

Lastly, copy down the first coefficient (5) and write it below the line:

```
-6/5 |  5   11   26   26
     |
     +------------------
        5
```

From here on out, you perform three steps repeatedly until you finish the last column of numbers:

1) Multiply the last number written below the line by the number in the box to the upper left.
2) Write that product in the next column to the right, just above the line.
3) Add the two numbers in that column and write the sum underneath the line.

```
-6/5 |  5   11   26   26
     |      -6   -6  -24
     +------------------
        5    5   20    2
```

As you can see, we began with 5, then multiplied it by $- \frac{6}{5}$ to get $- 6$, which we wrote under the $11$. The sum $11 + \left(- 6\right)$ is $5$, which we write under the line. We then repeat again and again until we write the final value $2$.

All that remains is writing the result. The last value ($2$) represents the remainder of the division, and should be written over the divisor $5 x + 6$. The other numbers under the line are the coefficients of the "whole" part of the resulting polynomial. There is one trick remaining: since the leading coefficient of the divisor $5x+6$ is 5, each of these values is currently 5 times its final value. Reading from right to left, these represent the constant term, the $x$ term, and the ${x}^{2}$ term, all of which should be divided by 5 before being written in the final result. Thus:

${x}^{2} + x + 4 + \frac{2}{5 x + 6}$
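The multiply-down, add-across procedure above is mechanical, so it is easy to sketch in code. This is an illustrative Python version (not part of the original answer), using exact fractions for the root $-\frac{6}{5}$ and rescaling the quotient by the divisor's leading coefficient 5 at the end:

```python
from fractions import Fraction

def synthetic_division(coeffs, root):
    """Divide a polynomial (coeffs listed highest degree first) by (x - root)."""
    out = [Fraction(coeffs[0])]          # bring down the first coefficient
    for c in coeffs[1:]:
        out.append(c + out[-1] * root)   # multiply down, add across
    return out[:-1], out[-1]             # quotient coefficients, remainder

# (5x^3 + 11x^2 + 26x + 26) / (5x + 6): divide by (x + 6/5), then rescale by 5
quot, rem = synthetic_division([5, 11, 26, 26], Fraction(-6, 5))
print([int(c) for c in quot], int(rem))  # [5, 5, 20] 2
final = [c / 5 for c in quot]
print([int(c) for c in final])           # [1, 1, 4]  i.e. x^2 + x + 4
```

The remainder 2 is unchanged by the rescaling, matching the answer $x^2 + x + 4 + \frac{2}{5x+6}$.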
# Blackboard problem with polynomial

The blackboard contains the two numbers 0 and 120. Each minute you are allowed to write an additional number $x$ on the blackboard if $x$ has not been written on the blackboard before and if $x$ is an integer root of some polynomial $P(x)$ in one variable whose coefficients are numbers on the blackboard. The game ends when you cannot write any further numbers on the blackboard. Which numbers will be written on the blackboard when the game ends?

• May the new $x$ be any integer or is it limited to some range? – 4386427 Nov 23 '15 at 18:03
• Is this supposed to have some neat trick or can I just simulate it? Am I only expected to record unique values (not 12 122s or something)? – kaine Nov 23 '15 at 18:08
• @StillLearning The new x may be any integer, no limits. – Haobin Nov 23 '15 at 18:17
• You should exclude the zero polynomial. – f'' Nov 23 '15 at 18:57
• To clarify: do you mean a polynomial of the form $c_0+c_1x+c_2x^2+\dots+c_nx^n$, where $c_0 \dots c_n$ are on the board, and $n > 0$, and $c_n \neq 0$? – Lynn Nov 23 '15 at 19:04

By the rational root theorem, any integer root of the polynomial $$a_nx^n + \dots + a_1x + a_0$$ is a divisor of its constant term $a_0$, if $a_0 \neq 0$. We will never let $a_0 = 0$, since that means we can divide the whole polynomial by $x$ (repeatedly, if necessary; yielding a root $0$ each time, which is initially already on the board) and get a polynomial (with the same non-zero roots) for which $a_0 \neq 0$. Thus initially, $a_0 = 120$ is our only choice. From this, suppose we get a new root $p$, a divisor of $a_0 = 120$. Then the next minute, we can choose $a_0$ from $\{p, 120\}$, both divisors of 120, etc. Clearly, the integer roots we generate by this process will always be divisors of 120, positive or negative. We show how to achieve all divisors of 120:

• First, $120x+120$ gives us $x=-1$.
• Then, $(-1)x^{120}+(-1)x^{119}+\dots+(-1)x^1+120$ gives us $x=1$.
• Then, $x+120$ gives us $x=-120$.
We can get $2$, $3$ and $5$ using only the coefficients in $\{\pm 120, \pm 1\}$:

• $-x^7 + x^3 \pm 120$ gives us $x = \pm 2$.
• $-x^4 -x^3 -x^2 -x \pm 120$ gives us $x = \pm 3$.
• $-x^3 +x \pm 120$ gives us $x = \pm 5$.

Aravind found a nice trick: now that we have $2$, $3$, and $5$ on the blackboard, the other roots follow by dividing from $120 = 2^3 \cdot 3 \cdot 5$. (If we have $p$ and $q$ on the board, we can use $px-q$ to get $x=q/p$, so first take $q=120$ and then repeat the process with $q \to q/p$ for a prime $p$.) We can conclude that the solution set is precisely the set of positive and negative divisors of 120.

• Cool. Best answer. Thanks for the link. – 4386427 Nov 23 '15 at 19:47

Every integer can be written on the board. First polynomial: $$P(x) = 120x - 120$$ The root of this polynomial is 1. Any integer: $$P(x) = x - x$$ In this solution, it is important to note that constants are not coefficients. Also note that $-$ is an operator, not part of the coefficient. Had it been intended as the coefficient, I would have written $x + -x$ (or even more explicitly $1x + -1x$). Edit: As an added shortcut (thanks to f''), you could simply use this polynomial from the initial board to retrieve any value for $x$: $$P(x) = 0x$$

• How did you let -120 be a coefficient? – Aravind Nov 23 '15 at 18:35
• Is it valid to use -120? It is not on the board... – 4386427 Nov 23 '15 at 18:35
• You guys both didn't read the final sentence in the answer. – Ian MacDonald Nov 23 '15 at 18:35
• Only polynomials in one variable imsc.res.in/~kapil/geometry/prereq1/node5.html – Haobin Nov 23 '15 at 18:36
• @Haobin: there is only one variable in the polynomials above. The $y$ that I have used is a construction for the reader, not the participants of the game. – Ian MacDonald Nov 23 '15 at 18:37

I believe the OP's intended answer is: Exactly the divisors of 120. This is true of the non-zero numbers on the board at any stage.
If $P(x)=0$, then we can write it as $xg(x)+d=0$ where $d$ is a divisor of 120; this implies that so is $x$. StillLearning's answer shows that all of them can be obtained. Once $2,3,5$ are obtained, other divisors of 120 can be obtained by repeated division, e.g.: $2x-120=0$ to get $60$, then $2x-60=0$ to get $30$, etc.

I started like this:

\begin{align} 120x + 120 = 0 &\rightarrow& x &= &-1\\ - x^{120} - x^{119} - ... - x^2 - x^1 +120 = 0 &\rightarrow&x &= &1\\ x + 120 = 0 &\rightarrow& x& = &-120\\ x^7 - x^3 + 120 = 0 &\rightarrow& x &= &-2\\ x - 2 = 0 &\rightarrow& x &= &2\\ \end{align}

Now we have $-120, -2, -1, 0, 1, 2, 120$. Having both $-1$ and $1$ means that I can always get the negated value of any new number. But I can't crack this puzzle as I keep finding new numbers. Wonder if I got it all wrong...

Edit: Some more (thanks to Aravind for the first 3)...

\begin{align} x^2 + 2x - 120 = 0 &\rightarrow& x &= &10\\ 2x - 10 = 0 &\rightarrow& x &= &5\\ 10x - 120 = 0 &\rightarrow& x &= &12\\ x^4 + x^3 + x^2 + x - 120 = 0 &\rightarrow& x &= &3\\ \end{align}

And

\begin{align} 2x - 120 = 0 &\rightarrow& x &= &60\\ 3x - 120 = 0 &\rightarrow& x &= &40\\ 5x - 120 = 0 &\rightarrow& x &= &24\\ 5x^2 + 10x - 120 = 0 &\rightarrow& x &= &4\\ 4x - 120 = 0 &\rightarrow& x &= &30\\ 4x^2 + 4x - 120 = 0 &\rightarrow& x &= &5\\ 5x - 120 = 0 &\rightarrow& x &= &24\\ 3x^2 + 2x - 120 = 0 &\rightarrow& x &= &6\\ 6x - 120 = 0 &\rightarrow& x &= &20\\ -2x^2 - x + 120 = 0 &\rightarrow& x &= &-8\\ -8x + 120 = 0 &\rightarrow& x &= &15\\ \end{align}

So far I have $\{0, 1, 2, 3, 4, 5, 6, 8, 10, 12, 15, 20, 24, 30, 40, 60, 120\}$ (and all the negated numbers as well). As far as I understand from other answers/comments this should be the complete list. I still wonder how that is proved.

• You can only get divisors of 120. Can you get them all? – Aravind Nov 23 '15 at 18:51
• Hmm... I see... I'll give it a try... 3 should be next then...
– 4386427 Nov 23 '15 at 18:52 • You can get 10 because 10 times 12 is 120 and you have 1,2 already. – Aravind Nov 23 '15 at 18:53 • 2x-10=0 gives 5, 10x-120=0 gives 12. – Aravind Nov 23 '15 at 18:56 • @Aravind - I don't have 10 on the board yet – 4386427 Nov 23 '15 at 18:58
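The root claims made in the answers above are easy to verify by brute force. Here is a small Python sketch (mine, not from the thread) that finds the integer roots of a polynomial by testing only the divisors of the constant term, exactly as the rational root theorem permits:

```python
def integer_roots(coeffs):
    """Integer roots of a polynomial given as coeffs, highest degree first.
    By the rational root theorem, any integer root divides the constant term."""
    a0 = coeffs[-1]
    assert a0 != 0, "strip factors of x first so the constant term is nonzero"
    divisors = [d for d in range(1, abs(a0) + 1) if a0 % d == 0]
    candidates = divisors + [-d for d in divisors]

    def p(x):
        v = 0
        for c in coeffs:
            v = v * x + c  # Horner evaluation
        return v

    return sorted(x for x in candidates if p(x) == 0)

print(integer_roots([120, 120]))                   # 120x + 120       -> [-1]
print(integer_roots([1, 120]))                     # x + 120          -> [-120]
print(integer_roots([-1, 0, 0, 0, 1, 0, 0, 120]))  # -x^7 + x^3 + 120 -> [2]
```

Every root found this way is automatically a divisor of 120, which is the "only divisors of 120" half of the argument.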
# Insert HTML Widgets in Blogdown

## Summary

Blogdown & Hugo allow for the insertion of HTML widgets like leaflet. This is a huge time saver because one doesn't need to learn JavaScript and can instead stay within the R environment.

# Why Blogdown?

There were two main reasons for moving to blogdown. First, I needed a blogging platform that moved seamlessly between R code and its output. It had to be easy to show the connection between the code that was used and its effect on the data. Put the two side by side and people can understand what happened. It's how I learn off the console.

The second reason was that blogdown offered the promise of embedding HTML widgets into a post. The htmlwidgets package is so important in that it saves me from having to learn JavaScript. Don't get me wrong, JavaScript is important, but I can barely get around in R and want to stay focused on working with data, not learning new computer languages.

This proved to be difficult, because there were no fewer than four Hugo themes that I tried, but I kept having problems with the display in a webpage. It was frustrating, to say the least. Not being able to embed dynamic tables into a website is a dealbreaker for me. Yes, my writing is more dependent, for example, on plots. But a dynamic table in the modern internet world just seems to be a necessity for readers. That's why I've started this article by using the DT package to place the mtcars dataset into a table. It worked! Also, find other examples of HTML embeds below.
For more information on the htmlwidgets package, see RStudio's page.

# Load Libraries

```r
my.pkgs <- c("leaflet", "widgetframe", "htmlwidgets", "magrittr",
             "sigma", "DT", "ggplot2", "plotly")
lapply(my.pkgs, function(x) suppressPackageStartupMessages(library(x, character.only = TRUE)))
```

# Insert Data Tables or DT Widget

```r
# see https://rstudio.github.io/DT/
DT::datatable(iris)
```

# Insert Leaflet Widget

```r
# https://bhaskarvk.github.io/widgetframe/
# wrap widget in <iframe> tags
m <- leaflet() %>%
  addTiles() %>%
  fitBounds(0, 40, 10, 50) %>%
  # move the center to Evansville
  setView(-87.569908, 37.975712, zoom = 15)
frameWidget(m)
```

# Insert Sigma JS

```r
data <- system.file("examples/ediaspora.gexf.xml", package = "sigma")
sigma(data)
```

# Insert Plotly Widget

```r
# https://www.htmlwidgets.org/showcase_plotly.html
p <- ggplot(data = diamonds, aes(x = cut, fill = clarity)) +
  geom_bar(position = "dodge")
ggplotly(p)
```
# Chess combination

On an 8×8 chessboard, we place 8 rooks so that no rook attacks any other. How many arrangements are there if we exclude: a) 1 main diagonal b) 2 main diagonals That means we cannot place a rook on any cell of the excluded diagonal(s).

- I'm too young to understand that solution. I need a simple solution using combinations. – LevanDokite Sep 20 '12 at 7:19

In general, if you want to count the number of ways of placing $n$ non-attacking rooks on an $n \times n$ chessboard (with some squares that the rooks must avoid), you can do it by calculating the permanent of a $(0,1)$-matrix: we place a $1$ wherever a rook is allowed to go, and a $0$ wherever the rook is not allowed to go. Calculating the permanent of $(0,1)$-matrices is a #P-complete problem. So we cannot, in general, expect a simple "one size fits all" formula for these problems.

Here's GAP code that solves the counting problem:

```
Permanent( [[0,1,1,1,1,1,1,1],
            [1,0,1,1,1,1,1,1],
            [1,1,0,1,1,1,1,1],
            [1,1,1,0,1,1,1,1],
            [1,1,1,1,0,1,1,1],
            [1,1,1,1,1,0,1,1],
            [1,1,1,1,1,1,0,1],
            [1,1,1,1,1,1,1,0]] );
```

gives 14833, which is the number of derangements (as previously mentioned; see also Sloane's A000166). It is also the number of $2 \times 8$ Latin rectangles whose first row is $(0,1,\ldots,7)$. In this instance, there are fairly simple formulae (see the links).

For the second question:

```
Permanent( [[0,1,1,1,1,1,1,0],
            [1,0,1,1,1,1,0,1],
            [1,1,0,1,1,0,1,1],
            [1,1,1,0,0,1,1,1],
            [1,1,1,0,0,1,1,1],
            [1,1,0,1,1,0,1,1],
            [1,0,1,1,1,1,0,1],
            [0,1,1,1,1,1,1,0]] );
```

yields 4752 possibilities. This is also the number of $3 \times 8$ Latin rectangles whose first row is $(0,1,\ldots,7)$ and second row is $(7,6,\ldots,0)$.

- 4752 is also the number of ways to place 3 non-attacking knights on a $6\times6$ board, according to oeis.org/A172134. Coincidence, presumably.
The entry directly relevant to this question is oeis.org/A003471 – Gerry Myerson Sep 20 '12 at 1:14

For (a), you want to count the permutations $\pi$ of $[1 \ldots 8]$ that have no fixed point, i.e. $\pi(i) \ne i$ for all $i$. These are also known as derangements. See http://en.wikipedia.org/wiki/Derangement

EDIT: For (b), see http://oeis.org/A003471
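Since $8! = 40\,320$, both permanents can also be checked by brute force over permutations. A quick Python sketch (my own, equivalent to the GAP computations above), where a placement of 8 non-attacking rooks is a permutation $p$ and cell $(i, p(i))$ must avoid the forbidden diagonal squares:

```python
from itertools import permutations

def count_rook_placements(n, forbidden):
    """Count n non-attacking rooks on an n x n board avoiding forbidden cells."""
    return sum(
        all((i, p[i]) not in forbidden for i in range(n))
        for p in permutations(range(n))
    )

main_diag = {(i, i) for i in range(8)}
both_diags = main_diag | {(i, 7 - i) for i in range(8)}

print(count_rook_placements(8, main_diag))   # 14833 (derangements of 8)
print(count_rook_placements(8, both_diags))  # 4752
```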
### Home > INT2 > Chapter 6 > Lesson 6.2.1 > Problem6-56 6-56. Examine the triangle shown at right. Solve for $x$ twice, using two different methods. Show your work for each method clearly. Use the $45^{\circ}\text{-}45^{\circ}\text{-}90^{\circ}$ triangle pattern. Another method is to use the Pythagorean Theorem. $x = 8 \sqrt{2} \approx 11.3$ units
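Both methods can be checked numerically. A quick sketch (not part of the lesson; it assumes legs of length 8, which is what the given answer $x = 8\sqrt{2}$ implies, since the figure is not shown here):

```python
import math

leg = 8  # leg length implied by the answer x = 8*sqrt(2)

# Method 1: 45-45-90 pattern, hypotenuse = leg * sqrt(2)
x_pattern = leg * math.sqrt(2)

# Method 2: Pythagorean theorem, x = sqrt(leg^2 + leg^2)
x_pythagoras = math.hypot(leg, leg)

print(round(x_pattern, 1), round(x_pythagoras, 1))  # 11.3 11.3
```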
# What is the chemical composition of the Grignard reagent? What is the actual chemical composition of the Grignard reagent and what is the method of preparation? Can it be prepared in college laboratories? A Grignard reagent is an organomagnesium halide of the generic form $\ce{RMgX}$, where $\ce{R}$ is a hydrocarbon group and $\ce{X}$ is a halide, usually $\ce{Cl}$, $\ce{Br}$, or $\ce{I}$. $$\ce{PhBr(C6H5Br) + Mg ->[][\ce{Et2O}]PhMgBr}$$
# 49 Cet

### Related Articles

**Spitzer IRS Spectroscopy of IRAS-discovered Debris Disks**
We have obtained Spitzer Space Telescope Infrared Spectrograph (IRS) 5.5-35 μm spectra of 59 main-sequence stars that possess IRAS 60 μm excess. The spectra of five objects possess spectral features that are well-modeled using micron-sized grains and silicates with crystalline mass fractions 0%-80%, consistent with T Tauri and Herbig AeBe stars. With the exception of η Crv, these objects are young with ages <=50 Myr. Our fits require the presence of a cool blackbody continuum, T_gr = 80-200 K, in addition to hot, amorphous, and crystalline silicates, T_gr = 290-600 K, suggesting that multiple parent body belts are present in some debris disks, analogous to the asteroid and Kuiper belts in our solar system. The spectra for the majority of objects are featureless, suggesting that the emitting grains probably have radii a > 10 μm. We have modeled the excess continua using a continuous disk with a uniform surface density distribution, expected if Poynting-Robertson and stellar wind drag are the dominant grain removal processes, and using a single-temperature blackbody, expected if the dust is located in a narrow ring around the star. The IRS spectra of many objects are better modeled with a single-temperature blackbody, suggesting that the disks possess inner holes. The distribution of grain temperatures, based on our blackbody fits, peaks at T_gr = 110-120 K. Since the timescale for ice sublimation of micron-sized grains with T_gr > 110 K is a fraction of a Myr, the lack of warmer material may be explained if the grains are icy. If planets dynamically clear the central portions of debris disks, then the frequency of planets around other stars is probably high. We estimate that the majority of debris disk systems possess parent body masses, M_PB < 1 M⊕.
The low inferred parent body masses suggest that planet formation is an efficient process. Based on observations with the NASA Spitzer Space Telescope, which is operated by the California Institute of Technology for NASA.

**Nearby Debris Disk Systems with High Fractional Luminosity Reconsidered**
By searching the IRAS and ISO databases, we compiled a list of 60 debris disks that exhibit the highest fractional luminosity values (f_d > 10^-4) in the vicinity of the Sun (d < 120 pc). Eleven out of these 60 systems are new discoveries. Special care was taken to exclude bogus disks from the sample. We computed the fractional luminosity values using available IRAS, ISO, and Spitzer data and analyzed the Galactic space velocities of the objects. The results revealed that stars with disks of high fractional luminosity often belong to young stellar kinematic groups, providing an opportunity to obtain improved age estimates for these systems. We found that practically all disks with f_d > 5×10^-4 are younger than 100 Myr. The distribution of the disks in the fractional luminosity versus age diagram indicates that (1) the number of old systems with high f_d is lower than was claimed before, (2) there exist many relatively young disks of moderate fractional luminosity, and (3) comparing the observations with a current theoretical model of debris disk evolution, a generally good agreement could be found.

**Optical polarimetry of infrared excess stars**
We present UBRVI polarimetry measurements for a group of 38 IRAS infrared excess stars and complement these observations with V-band data taken from the literature for 87 additional objects. After correcting the observed values for the interstellar contribution, we find that 48% of the analyzed sample has polarization excess. In addition, the polarization of these stars may correlate with infrared color excesses, particularly at 60 and 100 μm. We caution, however, that poor IRAS data quality at longer wavelengths affects this correlation.
We analyze the wavelength dependence of the linear polarization of 15 polarized objects in relation to Serkowski's empirical interstellar law. We find that for 6 to 7 objects (depending on the interstellar model) the measured polarization differs significantly from the empirical interstellar law, suggesting an intrinsic origin. We analyze the polarimetry distribution of IRAS infrared excess objects in relation to the Exoplanet host stars (i.e., stars associated with at least one likely planetary mass object). The corresponding polarimetry distributions are different within a high confidence level. Finally, we compare the metallicity distributions of F and G IRAS infrared excess, Exoplanet host and field main sequence stars, and find that F-G IRAS infrared excess objects have metallicities quite similar (although not identical) to field main sequence stars and significantly different from the Exoplanet host group.

**Multi-aperture photometry of extended IR sources with ISOPHOT. I. The nature of extended IR emission of planetary nebulae**
Context: ISOPHOT multi-aperture photometry is an efficient method to resolve compact sources or to detect extended emission down to relatively faint levels with single detectors in the wavelength range 3 to 100 μm. Aims: Using ISOPHOT multi-aperture photometry and complementary ISO spectra and IR spectral energy distributions we discuss the nature of the extended IR emission of the two PNe NGC 6543 and NGC 7008. Methods: In the on-line appendix we describe the data reduction, calibration and interpretation methods based on a simultaneous determination of the IR source and background contributions from the on-source multi-aperture sequences. Normalized profiles enable direct comparison with point source and flat-sky references. Modelling the intensity distribution offers a quantitative method to assess source extent and angular scales of the main structures and is helpful in reconstructing the total source flux, if the source extends beyond a radius of 1 arcmin.
The photometric calibration is described and typical accuracies are derived. General uncertainty, quality and reliability issues are addressed, too. Transient fitting to non-stabilised signal time series, by means of combinations of exponential functions with different time constants, improves the actual average signals and reduces their uncertainty. Results: The emission of NGC 6543 in the 3.6 μm band coincides with the core region of the optical nebula and is homogeneously distributed. It is comprised of 65% continuum and 35% atomic hydrogen line emission. In the 12 μm band a resolved but compact double source is surrounded by a fainter ring structure with all emission confined to the optical core region. Strong line emission of [ArIII] at 8.99 μm and in particular [SIV] at 10.51 μm shapes this spatial profile. The unresolved 60 μm emission originates from dust. It is described by a modified (emissivity index β = 1.5) blackbody with a temperature of 85 K, suggesting that warm dust with a mass of 6.4 × 10^-4 M⊙ is mixed with the ionised gas. The gas-to-dust mass ratio is about 220. The 25 μm emission of NGC 7008 is characterised by a FWHM of about 50″ with an additional spot-like or ring-like enhancement at the bright rim of the optical nebula. The 60 μm emission exhibits a similar shape, but is about twice as extended. Analysis of the spectral energy distribution suggests that the 25 μm emission is associated with 120 K warm dust, while the 60 μm emission is dominated by a second dust component at 55 K. The dust mass associated with this latter component amounts to 1.2 × 10^-3 M⊙, significantly higher than previously derived. The gas-to-dust mass ratio is 59 which, compared to the average value of 160 for the Milky Way, hints at dust enrichment by this object.

**CO emission from discs around isolated HAeBe and Vega-excess stars**
We describe results from a survey for J = 3-2 12CO emission from visible stars classified as having an infrared excess.
The line is clearly detected in 21 objects, and significant molecular gas (>=10^-3 Jupiter masses) is found to be common in targets with infrared excesses >=0.01 (>=56 per cent of objects), but rare for those with smaller excesses (~10 per cent of objects). A simple geometrical argument based on the infrared excess implies that disc opening angles are typically >=12° for objects with detected CO; within this angle, the disc is optically thick to stellar radiation and shields the CO from photodissociation. Two or three CO discs have an unusually low infrared excess (<=0.01), implying the shielding disc is physically very thin (<=1°). Around 50 per cent of the detected line profiles are double-peaked, while many of the rest have significantly broadened lines, attributed to discs in Keplerian rotation. Simple model fits to the line profiles indicate outer radii in the range 30-300 au, larger than found through fitting continuum SEDs, but similar to the sizes of debris discs around main-sequence stars. As many as five have outer radii smaller than the Solar System (50 au), with a further four showing evidence of gas in the disc at radii smaller than 20 au. The outer disc radius is independent of the stellar spectral type (from K through to B9), but there is evidence of a correlation between radius and total dust mass. Also the mean disc size appears to decrease with time: discs around stars of age 3-7 Myr have a mean radius ~210 au, whereas discs of age 7-20 Myr are a factor of three smaller. This shows that a significant mass of gas (at least 2 M⊕) exists beyond the region of planet formation for up to ~7 Myr, and may remain for a further ~10 Myr within this region. The only bona fide debris disc with detected CO is HD9672; this shows a double-peaked CO profile and is the most compact gas disc observed, with a modelled outer radius of 17 au.
In the case of HD141569, detailed modelling of the line profile indicates gas may lie in two rings, with radii of 90 and 250 au, similar to the dust structure seen in scattered light and the mid-infrared. In both AB Aur and HD163296 we also find that the sizes of the molecular disc and the dust scattering disc are similar; this suggests that the molecular gas and small dust grains are closely co-located.

**Formation and Evolution of Planetary Systems: Upper Limits to the Gas Mass in HD 105**
We report infrared spectroscopic observations of HD 105, a nearby (~40 pc) and relatively young (~30 Myr) G0 star with excess infrared continuum emission, which has been modeled as arising from an optically thin circumstellar dust disk with an inner hole of size >~13 AU. We have used the high spectral resolution mode of the Infrared Spectrometer (IRS) on the Spitzer Space Telescope to search for gas emission lines from the disk. The observations reported here provide upper limits to the fluxes of H2 S(0) 28 μm, H2 S(1) 17 μm, H2 S(2) 12 μm, [Fe II] 26 μm, [Si II] 35 μm, and [S I] 25 μm infrared emission lines. The H2 line upper limits place direct constraints on the mass of warm molecular gas in the disk: M(H2) < 4.6, 3.8×10^-2, and 3.0×10^-3 M_J at T=50, 100, and 200 K, respectively. We also compare the line flux upper limits to predictions from detailed thermal/chemical models of various gas distributions in the disk. These comparisons indicate that if the gas distribution has an inner hole with radius r_i,gas, the surface density at that inner radius is limited to values ranging from <~3 g cm^-2 at r_i,gas = 0.5 AU to 0.1 g cm^-2 at r_i,gas = 5-20 AU. These values are considerably below the value for a minimum mass solar nebula, and suggest that less than 1 Jupiter mass (M_J) of gas (at any temperature) exists in the 1-40 AU planet-forming region. Therefore, it is unlikely that there is sufficient gas for gas giant planet formation to occur in HD 105 at this time.
**8-13 μm Spectroscopy of Young Stellar Objects: Evolution of the Silicate Feature**
Silicate features arising from material around pre-main-sequence stars are useful probes of the star and planet formation process. In order to investigate possible connections between dust processing and disk properties, 8-13 μm spectra of 34 young stars, exhibiting a range of circumstellar environments and including spectral types A-M, were obtained using the Long Wavelength Spectrometer at the W. M. Keck Observatory. The broad 9.7 μm amorphous silicate (SiO stretching) feature that dominates this wavelength regime evolves from absorption in young, embedded sources, to emission in optically revealed stars, and to complete absence in older "debris" disk systems for both low- and intermediate-mass stars. This is similar to the evolutionary pattern seen in Infrared Space Observatory (ISO) observations of high/intermediate-mass young stellar objects (YSOs). The peak wavelength and FWHM are centered about 9.7 and ~2.3 μm, respectively, corresponding to amorphous olivine, with a larger spread in FWHM for embedded sources and in peak wavelength for disks. In a few of our objects that have been previously identified as class I low-mass YSOs, the observed silicate feature is more complex, with absorption near 9.5 μm and emission peaking around 10 μm. Although most of the emission spectra show broad classical features attributed to amorphous silicates, small variations in the shape/strength may be linked to dust processing, including grain growth and/or silicate crystallization. For some of the Herbig Ae stars in the sample, the broad emission feature has an additional bump near 11.3 μm, similar to the emission from crystalline forsterite seen in comets and the debris disk β Pictoris.
Only one of the low-mass stars, Hen 3-600A, and one Herbig Ae star, HD 179218, clearly show strong, narrow emission near 11.3 μm. We study quantitatively the evidence for evolutionary trends in the 8-13 μm spectra through a variety of spectral shape diagnostics. Based on the lack of correlation between these diagnostics and broadband infrared luminosity characteristics for silicate emission sources, we conclude that although spectral signatures of dust processing are present, they cannot be connected clearly to disk evolutionary stage (for optically thick disks) or optical depth (for optically thin disks). The diagnostics of silicate absorption features (other than the central wavelength of the feature), however, are tightly correlated with optical depth and thus do not probe silicate grain properties.

**Classification of Spectra from the Infrared Space Observatory PHT-S Database**
We have classified over 1500 infrared spectra obtained with the PHT-S spectrometer aboard the Infrared Space Observatory according to the system developed for the Short Wavelength Spectrometer (SWS) spectra by Kraemer et al. The majority of these spectra contribute to subclasses that are either underrepresented in the SWS spectral database or contain sources that are too faint, such as M dwarfs, to have been observed by either the SWS or the Infrared Astronomical Satellite Low Resolution Spectrometer. There is strong overall agreement about the chemistry of objects observed with both instruments. Discrepancies can usually be traced to the different wavelength ranges and sensitivities of the instruments.
Finally, a large subset of the observations (≈250 spectra) exhibit a featureless, red continuum that is consistent with emission from zodiacal dust and suggest directions for further analysis of this serendipitous measurement of the zodiacal background. Based on observations with the Infrared Space Observatory (ISO), a European Space Agency (ESA) project with instruments funded by ESA Member States (especially the Principal Investigator countries: France, Germany, Netherlands, and United Kingdom) and with the participation of the Institute of Space and Astronautical Science (ISAS) and the National Aeronautics and Space Administration (NASA).

**Dusty Debris Disks as Signposts of Planets: Implications for Spitzer Space Telescope**
Submillimeter and near-infrared images of cool dusty debris disks and rings suggest the existence of unseen planets. At dusty but nonimaged stars, semimajor axes of associated planets can be estimated from the dust temperature. For some young stars these semimajor axes are greater than 1" as seen from Earth. Such stars are excellent targets for sensitive near-infrared imaging searches for warm planets. To probe the full extent of the dust and hence of potential planetary orbits, Spitzer observations should include measurements with the 160 μm filter.

**Are Giant Planets Forming around HR 4796A?**
We have obtained Far Ultraviolet Spectroscopic Explorer and Hubble Space Telescope STIS spectra of HR 4796A, a nearby 8 Myr old main-sequence star that possesses a dusty circumstellar disk whose inclination has been constrained from high-resolution near-infrared observations to be ~17° from edge-on. We searched for circumstellar absorption in the ground states of C II λ1036.3, O I λ1039.2, Zn II λ2026.1, Lyman series H2, and CO (A-X) and failed to detect any of these species. We place upper limits on the column densities and infer upper limits on the gas masses assuming that the gas is in hydrostatic equilibrium, is well mixed, and has a temperature T_gas ~ 65 K.
Our measurements suggest that this system possesses very little molecular gas. Therefore, we infer an upper limit for the gas-to-dust ratio (<= 4.0) assuming that the gas is atomic. We measure less gas in this system than is required to form the envelope of Jupiter.

Polarimetric Studies of Stars with an Infrared Emission Excess
The results of polarimetric and IR (IRAS) observations of 24 B-A-F stars are given. Intrinsic polarization of the light from 11 of the 24 stars is observed. The degree of polarization for the other 13 stars is within the measurement errors. Two-color diagrams are also constructed. From a comparison of the degree of polarization with the color index on the two-color diagrams it is seen that 8 of these 13 stars probably are of the Vega type, while 5 are stars with gas-dust shells and/or disk-shells. It is shown that 6 of the aforementioned 11 stars with intrinsic polarization evidently are stars with gas-dust shells and/or disk-shells, while 5 of them (also including No. 24) are of the Vega type. It is also shown that the IR emission from 10 of the stars corresponds to a power-law distribution F. This fact may be explained both by free-free transitions of electrons and by thermal emission from dust grains in circumstellar gas-dust shells (disks).

Astrochemistry: The molecular universe
Helen J Fraser, Martin R S McCoustra and David A Williams present a simple guide to astrochemistry. Molecules play a fundamental role in many regions of our universe. The science where chemistry and astronomy overlap is known as astrochemistry, a branch of astronomy that has risen in importance over recent years. In this article we review the significance of chemistry in several astronomical environments, including the early universe, interstellar clouds, star-forming regions and protoplanetary disks.
We discuss theoretical models, laboratory experiments and observational data, and present several recent and exciting results that challenge our perception of the "molecular universe".

Rotational velocities of A-type stars in the northern hemisphere. II. Measurement of v sin i
This work is the second part of the set of measurements of v sin i for A-type stars, begun by Royer et al. (\cite{Ror_02a}). Spectra of 249 B8 to F2-type stars brighter than V=7 have been collected at Observatoire de Haute-Provence (OHP). Fourier transforms of several line profiles in the range 4200-4600 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error mainly depends on v sin i, and this relative error of the rotational velocity is found to be about 5% on average. The systematic shift with respect to standard values from Slettebak et al. (\cite{Slk_75}), previously found in the first paper, is here confirmed. Comparisons with data from the literature agree with our findings: v sin i values from Slettebak et al. are underestimated and the relation between both scales follows a linear law: v sin i_new = 1.03 v sin i_old + 7.7. Finally, these data are combined with those from the previous paper (Royer et al. \cite{Ror_02a}), together with the catalogue of Abt & Morrell (\cite{AbtMol95}). The resulting sample includes some 2150 stars with homogenized rotational velocities. Based on observations made at Observatoire de Haute Provence (CNRS), France. Tables \ref{results} and \ref{merging} are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/897

Rotational velocities of A-type stars. I. Measurement of v sin i in the southern hemisphere
Within the scope of a Key Programme determining fundamental parameters of stars observed by HIPPARCOS, spectra of 525 B8 to F2-type stars brighter than V=8 have been collected at ESO.
Fourier transforms of several line profiles in the range 4200-4500 Å are used to derive v sin i from the frequency of the first zero. Statistical analysis of the sample indicates that measurement error is a function of v sin i, and this relative error of the rotational velocity is found to be about 6% on average. The results obtained are compared with data from the literature. There is a systematic shift from standard values from \citet{Slk_75}, which are 10 to 12% lower than our findings. Comparisons with other independent v sin i values tend to prove that those from Slettebak et al. are underestimated. This effect is attributed to the presence of binaries in the standard sample of Slettebak et al., and to the model atmosphere they used. Based on observations made at the European Southern Observatory (ESO), La Silla, Chile, in the framework of the Key Programme 5-004-43K. Table 4 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.125.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/381/105

Substantial reservoirs of molecular hydrogen in the debris disks around young stars
Circumstellar accretion disks transfer matter from molecular clouds to young stars and to the sites of planet formation. The disks observed around pre-main-sequence stars have properties consistent with those expected for the pre-solar nebula from which our own Solar System formed 4.5 Gyr ago. But the "debris" disks that encircle more than 15% of nearby main-sequence stars appear to have very small amounts of gas, based on observations of the tracer molecule carbon monoxide: these observations have yielded gas/dust ratios much less than 0.1, whereas the interstellar value is about 100 (ref. 9). Here we report observations of the lowest rotational transitions of molecular hydrogen (H2) that reveal large quantities of gas in the debris disks around the stars β Pictoris, 49 Ceti and HD 135344.
The gas masses calculated from the data are several hundreds to a thousand times greater than those estimated from the CO observations, and yield gas/dust ratios of the same order as the interstellar value.

Dusty Circumstellar Disks
Dusty circumstellar disks in orbit around main-sequence stars were discovered in 1983 by the Infrared Astronomical Satellite. It was the first time material that was not another star had been seen in orbit around a main-sequence star other than our Sun. Since that time, analyses of data from the Infrared Astronomical Satellite, the Infrared Space Observatory, and ground-based telescopes have enabled astronomers to paint a picture of dusty disks around numerous main-sequence and post-main-sequence stars. This review describes, primarily in an evolutionary framework, the properties of some dusty disks orbiting, first, pre-main-sequence stars, then main-sequence and post-main-sequence stars, and ending with white dwarfs.

H2 and CO Emission from Disks around T Tauri and Herbig Ae Pre-Main-Sequence Stars and from Debris Disks around Young Stars: Warm and Cold Circumstellar Gas
We present ISO Short-Wavelength Spectrometer observations of H2 pure-rotational line emission from the disks around low- and intermediate-mass pre-main-sequence stars as well as from young stars thought to be surrounded by debris disks. The pre-main-sequence sources have been selected to be isolated from molecular clouds and to have circumstellar disks revealed by millimeter interferometry. We detect "warm" (T~100-200 K) H2 gas around many sources, including tentatively the debris-disk objects. The mass of this warm gas ranges from ~10^-4 Msolar up to 8×10^-3 Msolar and can constitute a non-negligible fraction of the total disk mass. Complementary single-dish 12CO 3-2, 13CO 3-2, and 12CO 6-5 observations have been obtained as well. These transitions probe cooler gas at T~20-80 K.
Most objects show a double-peaked CO emission profile characteristic of a disk in Keplerian rotation, consistent with interferometer data on the lower J lines. The ratios of the 12CO 3-2/13CO 3-2 integrated fluxes indicate that 12CO 3-2 is optically thick but that 13CO 3-2 is optically thin or at most moderately thick. The 13CO 3-2 lines have been used to estimate the cold gas mass. If a H2/CO conversion factor of 1×10^4 is adopted, the derived cold gas masses are factors of 10-200 lower than those deduced from 1.3 millimeter dust emission assuming a gas/dust ratio of 100, in accordance with previous studies. These findings confirm that CO is not a good tracer of the total gas content in disks since it can be photodissociated in the outer layers and frozen onto grains in the cold dense part of disks, but that it is a robust tracer of the disk velocity field. In contrast, H2 can shield itself from photodissociation even in low-mass "optically thin" debris disks and can therefore survive longer. The warm gas is typically 1%-10% of the total mass deduced from millimeter continuum emission, but it can increase up to 100% or more for the debris-disk objects. Thus, residual molecular gas may persist into the debris-disk phase. No significant evolution in the H2, CO, or dust masses is found for stars with ages in the range of 10^6-10^7 yr, although a decrease is found for the older debris-disk star β Pictoris. The large amount of warm gas derived from H2 raises the question of the heating mechanism(s). Radiation from the central star as well as the general interstellar radiation field heat an extended surface layer of the disk, but existing models fail to explain the amount of warm gas quantitatively. The existence of a gap in the disk can increase the area of material influenced by radiation. Prospects for future observations with ground- and space-borne observations are discussed.
Based in part on observations with ISO, an ESA project with instruments funded by ESA member states (especially the PI countries: France, Germany, Netherlands, and the United Kingdom) and with participation of ISAS and NASA.

Gas-Dust Shells around Some Early-Type Stars with an IR Excess (of Emission)
The results of an investigation of IR (IRAS) observations of 58 O-B-A-F stars of different luminosity classes, which are mainly members of various associations, are presented. The color indices of these stars are determined and two-color diagrams are constructed. The emission excesses at 12 and 25 μm (E12 and E25) are also compared with the absorption A1640 of UV radiation. It is concluded that 24 stars (of the 58 investigated) are disk systems of the Vega type, to which Vega = N 53 also belongs. Eight known stars of the Vega type are also given in the figures for comparison. The remaining 34 stars may have gas-dust shells and/or shell-disks. The IR emission excesses of the 34 investigated stars and 11 comparison stars (eight of them are Be-Ae stars) are evidently due both to thermal emission from grains and to the emission from free-free transitions of electrons in the gas-dust shells of these stars.

Mid-Infrared Imaging of Candidate Vega-like Systems
We have conducted deep mid-infrared imaging of a relatively nearby sample of candidate Vega-like stars using the OSCIR instrument on the Cerro Tololo Inter-American Observatory 4 m and Keck II 10 m telescopes. Our discovery of a spatially resolved disk around HR 4796A has already been reported in 1998 by Jayawardhana et al. Here we present imaging observations of the other members of the sample, including the discovery that only the primary in the HD 35187 binary system appears to harbor a substantial circumstellar disk and the possible detection of extended disk emission around 49 Ceti. We derive global properties of the dust disks, place constraints on their sizes, and discuss several interesting cases in detail.
Although our targets are believed to be main-sequence stars, we note that several have large infrared excesses compared with prototype Vega-like systems and may therefore be somewhat younger. The disk size constraints we derive, in many cases, imply emission from relatively large (>~10 μm) particles at mid-infrared wavelengths.

NICMOS Coronagraphic Observations of 55 Cancri
We present new near-infrared (1.1 μm) observations of the circumstellar environment of the planet-bearing star 55 Cancri. With these Hubble Space Telescope (HST) images we are unable to confirm the observation of bright scattered radiation at longer NIR wavelengths previously reported by Trilling and coworkers. NICMOS coronagraphic images with detection sensitivities to ~100 μJy arcsec^-2 at 1.1 μm in the region 28-60 AU from the star fail to reveal any significant excess flux in point-spread function (PSF) subtracted images taken in two HST orbits. These new observations place flux densities in the 19-28 AU zone at a factor of 10 or more below the reported ground-based observations. Applying a suite of a dozen well-matched coronagraphic reference PSFs, including one obtained in the same orbits as the observations of 55 Cnc, yielded consistently null results in detecting a disk. We also searched for and failed to find a suggested flux-excess anisotropy in the ratio of ~1.7:1 in the circumstellar background along and orthogonal to the plane of the putative disk. We suggest that, if such a disk does exist, then the total 1.1 μm spectral flux density in an annular zone 28-42 AU from the star must be no more than ~0.4 mJy, at least 10 times smaller than suggested by Trilling and Brown, upon which their very large estimate for the total dust mass (0.4 M⊕) was based. Based on the far-infrared and submillimeter flux of this system and observations of scattered light and thermal emission from other debris disks, we also expect the intensity of the scattered light to be at least an order of magnitude below our upper limits.
EXPORT: Optical photometry and polarimetry of Vega-type and pre-main sequence stars
This paper presents optical UBVRI broadband photo-polarimetry of the EXPORT sample obtained at the 2.5 m Nordic Optical Telescope. The database consists of multi-epoch photo-polarimetry of 68 pre-main-sequence and main-sequence stars. An investigation of the polarization variability indicates that 22 objects are variable at the 3 sigma level in our data. All these objects are pre-main sequence stars, consisting of both T Tauri and Herbig Ae/Be objects, while the main-sequence, Vega-type and post-T Tauri type objects are not variable. The polarization properties of the variable sources are mostly indicative of the UXOR-type behaviour; the objects show highest polarization when the brightness is at minimum. We add seven new objects to the class of UXOR variables (BH Cep, VX Cas, DK Tau, HK Ori, LkHα 234, KK Oph and RY Ori). The main reason for their discovery is the fact that our data-set is the largest of its kind, indicating that many more young UXOR-type pre-main sequence stars remain to be discovered. The set of Vega-like systems has been investigated for the presence of intrinsic polarization. As they lack variability, this was done using indirect methods, and apart from the known case of BD+31°643, the following stars were found to be strong candidates to exhibit polarization due to the presence of circumstellar disks: 51 Oph, BD+31°643C, HD 58647 and HD 233517. Table A1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/379/564

EXPORT: Spectral classification and projected rotational velocities of Vega-type and pre-main sequence stars
In this paper we present the first comprehensive results extracted from the spectroscopic campaigns carried out by the EXPORT (EXoPlanetary Observational Research Team) consortium.
During 1998-1999, EXPORT carried out an intensive observational effort in the framework of the origin and evolution of protoplanetary systems in order to obtain clues on the evolutionary path from the early stages of the pre-main sequence to stars with planets already formed. The spectral types of 70 stars, and the projected rotational velocities, v sin i, of 45 stars, mainly Vega-type and pre-main sequence, have been determined from intermediate- and high-resolution spectroscopy, respectively. The first part of the work is of fundamental importance in order to accurately place the stars in the HR diagram and determine the evolutionary sequences; the second part provides information on the kinematics and dynamics of the stars and the evolution of their angular momentum. The advantage of using the same observational configuration and methodology for all the stars is the homogeneity of the set of parameters obtained. Results from previous work are revised, leading in some cases to completely new determinations of spectral types and projected rotational velocities; for some stars no previous studies were available. Tables 1, 2, and 6 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/378/116 Based on observations made with the Isaac Newton and the William Herschel telescopes operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.

Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statistics
The Catalogue, available at the Centre de Données Stellaires de Strasbourg, consists of 13,573 records concerning the results obtained from different methods for 7778 stars, reported in the literature.
The following data are listed for each star: identifications, apparent magnitude, spectral type, apparent diameter in arcsec, absolute radius in solar units, method of determination, reference, remarks. Comments and statistics obtained from CADARS are given. The Catalogue is available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521

EXPORT: Near-IR observations of Vega-type and pre-main sequence stars
We present near-IR JHK photometric data of a sample of 58 main-sequence, mainly Vega-type, and pre-main sequence stars. The data were taken during four observing runs in the period May 1998 to January 1999 and form part of a coordinated effort with simultaneous optical spectroscopy and photo-polarimetry. The near-IR colors of the MS stars correspond in most cases to photospheric colors, although noticeable reddening is present towards a few objects, and these stars show no brightness variability within the observational errors. On the other hand, the PMS stars show near-IR excesses and variability consistent with previous data. Tables 1 and 2 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strastg.fr/cgi-bin/qcat?J/A+A/365/110

Optical, infrared and millimetre-wave properties of Vega-like systems - IV. Observations of a new sample of candidate Vega-like sources
Photometric observations at optical and near-infrared wavelengths are presented for members of a new sample of candidate Vega-like systems, or main sequence stars with excess infrared emission due to circumstellar dust. The observations are combined with IRAS fluxes to define the spectral energy distributions of the sources. Most of the sources show only photospheric emission at near-IR wavelengths, indicating a lack of hot (~1000 K) dust. Mid-infrared spectra are presented for four sources from the sample.
One of them, HD 150193, shows strong silicate emission, while another, HD 176363, was not detected. The spectra of two stars from our previous sample of Vega-like sources both show UIR-band emission, attributed to hydrocarbon materials. Detailed comparisons of the optical and IRAS positions suggest that in some cases the IRAS source is not physically associated with the visible star. Alternative associations are suggested for several of these sources. Fractional excess luminosities are derived from the observed spectral energy distributions. The values found are comparable to those measured previously for other Vega-like sources.

ISOPHOT Observations of Dust Disks around Main Sequence (Vega-Like) Stars
The photometer (ISOPHOT) on the Infrared Space Observatory (ISO) has proved to be invaluable for investigating the dust around main sequence stars (both prototypes and candidate Vega-like stars). The long wavelength camera (at 60 and 90 μm) has been used to map the area around the stars to establish whether the dust disk is extended. Low-resolution spectra between 5.8 and 11.6 μm show whether the dust is composed of silicate grains, and whether molecular features are present. The four prototype Vega-like stars (Vega, β Pic, Fomalhaut, ɛ Eri) are studied, as well as eight other stars, which are main sequence stars with cool dust associated with them. We find that the spectra of β Pic, 49 Cet, HD98800, and HD135344 show excess emission from the cool dust around the star, HD144432 and HD139614 show silicate dust emission, HD169142 and HD34700 show emission features from carbon-rich molecules (possibly PAHs, polycyclic aromatic hydrocarbon molecules), and HD142666 shows emission features from both carbon-rich molecules and silicate dust. Up to 11.6 μm, the emission from Vega, Fomalhaut, and ɛ Eri is dominated by the stellar photosphere. At 60 and 90 μm, the extended dust emission is mapped, and the disk resolved in eight cases.
The dust mass in the disks is found to range from around 10^-9 to 10^-4 Msolar. Since several of the stars are younger than the Sun, and the disks have sufficient material of the type found in the Solar System, these disks could be in the early stages of planet formation.

A Single Circumbinary Disk in the HD 98800 Quadruple System
We present subarcsecond thermal infrared imaging of HD 98800, a young quadruple system composed of a pair of low-mass spectroscopic binaries separated by 0.8" (38 AU), each with a K-dwarf primary. Images at wavelengths ranging from 5 to 24.5 μm show unequivocally that the optically fainter binary, HD 98800B, is the sole source of a comparatively large infrared excess on which a silicate emission feature is superposed. The excess is detected only at wavelengths of 7.9 μm and longer, peaks at 25 μm, and has a best-fit blackbody temperature of 150 K, indicating that most of the dust lies at distances greater than the orbital separation of the spectroscopic binary. We estimate the radial extent of the dust with a disk model that approximates radiation from the spectroscopic binary as a single source of equivalent luminosity. Given the data, the most likely values of disk properties in the ranges considered are Rin = 5.0 ± 2.5 AU, ΔR = 13 ± 8 AU, λ0 = 2 (+4/-1.5) μm, γ = 0 ± 2.5, and σtotal = 16 ± 3 AU^2, where Rin is the inner radius, ΔR is the radial extent of the disk, λ0 is the effective grain size, γ is the radial power-law exponent of the optical depth τ, and σtotal is the total cross section of the grains. The range of implied disk masses is 0.001-0.1 times that of the Moon. These results show that, for a wide range of possible disk properties, a circumbinary disk is far more likely than a narrow ring.

Polarimetric observations of some stars with an infrared (emission) excess.
Not Available

Polarization measurements of Vega-like stars
Optical linear polarization measurements are presented for about 30 Vega-like stars.
These are then compared with the polarization observed for normal field stars. A significant fraction of the Vega-like stars are found to show polarization much in excess of that expected to be due to interstellar matter along the line of sight to the star. The excess polarization must be intrinsic to the star, caused by circumstellar scattering material that is distributed in a flattened disk. A correlation between infrared excess and optical polarization is found for the Vega-like stars.

The Circumstellar Disk of HD 141569 Imaged with NICMOS
Coronagraphic imaging with the Near-Infrared Camera and Multiobject Spectrometer on the Hubble Space Telescope reveals a large, ~400 AU (4") radius, circumstellar disk around the Herbig Ae/Be star HD 141569. A reflected light image at 1.1 μm shows the disk oriented at a position angle of 356° ± 5° and inclined to our line of sight by 51° ± 3°; the intrinsic scattering function of the dust in the disk makes the side inclined toward us, the eastern side, brighter. The disk flux density peaks 185 AU (1.85") from the star and falls off to both larger and smaller radii. A region of depleted material, or a gap, in the disk is centered 250 AU from the star. The dynamical effect of one or more planets may be necessary to explain this morphology.

#### Observational and measured data for 49 Ceti

Constellation: Cetus
Right ascension: 01h 34m 37.80s
Declination: -15° 40' 34.0"
Apparent magnitude: 5.63
Distance: 61.275 parsecs
Proper motion in right ascension: 96
Proper motion in declination: -2.5
B-T magnitude: 5.689
V-T magnitude: 5.615

Catalog designations:
Flamsteed: 49 Cet
HD 1989: HD 9672
TYCHO-2 2000: TYC 5852-2148-1
USNO-A2.0: USNO-A2 0675-00563389
BSC 1991: HR 451
HIP: HIP 7345
## G = C2^2×C3.He3, order 324 = 2^2·3^4

### Direct product of C2^2 and C3.He3

direct product, metabelian, nilpotent (class 3), monomial

Series:
Derived series: C1 — C3^2 — C2^2×C3.He3
Chief series: C1 — C3 — C3^2 — C3×C9 — C3.He3 — C2×C3.He3 — C2^2×C3.He3
Lower central: C1 — C3 — C3^2 — C2^2×C3.He3
Upper central: C1 — C2×C6 — C6^2 — C2^2×C3.He3

Generators and relations for C2^2×C3.He3:
G = < a,b,c,d,e,f | a^2=b^2=c^3=e^3=1, d^3=c^-1, f^3=c, ab=ba, ac=ca, ad=da, ae=ea, af=fa, bc=cb, bd=db, be=eb, bf=fb, cd=dc, ce=ec, cf=fc, de=ed, fdf^-1=cde^-1, fef^-1=c^-1e >

Subgroups: 115 in 65 conjugacy classes, 40 normal (10 characteristic)
C1, C2, C3, C3, C2^2, C6, C6, C9, C3^2, C2×C6, C2×C6, C18, C3×C6, C3×C9, 3- 1+2, C2×C18, C6^2, C3×C18, C2×3- 1+2, C3.He3, C6×C18, C2^2×3- 1+2, C2×C3.He3, C2^2×C3.He3

Quotients: C1, C2, C3, C2^2, C6, C3^2, C2×C6, C3×C6, He3, C6^2, C2×He3, C3.He3, C2^2×He3, C2×C3.He3, C2^2×C3.He3

Smallest permutation representation of C2^2×C3.He3: on 108 points. Generators in S108:
(1 59)(2 60)(3 61)(4 62)(5 63)(6 55)(7 56)(8 57)(9 58)(10 64)(11 65)(12 66)(13 67)(14 68)(15 69)(16 70)(17 71)(18 72)(19 73)(20 74)(21 75)(22 76)(23 77)(24 78)(25 79)(26 80)(27 81)(28 82)(29 83)(30 84)(31 85)(32 86)(33 87)(34 88)(35 89)(36 90)(37 91)(38 92)(39 93)(40 94)(41 95)(42 96)(43 97)(44 98)(45 99)(46 100)(47 101)(48 102)(49 103)(50 104)(51 105)(52 106)(53 107)(54 108)
(1 32)(2 33)(3 34)(4 35)(5 36)(6 28)(7 29)(8 30)(9 31)(10 37)(11 38)(12 39)(13 40)(14 41)(15 42)(16 43)(17 44)(18 45)(19 46)(20 47)(21 48)(22 49)(23 50)(24 51)(25 52)(26 53)(27 54)(55 82)(56 83)(57 84)(58 85)(59 86)(60 87)(61 88)(62 89)(63 90)(64 91)(65 92)(66 93)(67 94)(68 95)(69 96)(70 97)(71 98)(72 99)(73 100)(74 101)(75 102)(76 103)(77 104)(78 105)(79 106)(80 107)(81 108)
(1 7 4)(2 8 5)(3 9 6)(10 16 13)(11 17 14)(12 18 15)(19 25 22)(20 26 23)(21 27 24)(28 34 31)(29 35 32)(30 36 33)(37 43 40)(38 44 41)(39 45 42)(46 52 49)(47 53 50)(48 54 51)(55 61 58)(56 62 59)(57 63 60)(64 70 67)(65 71
68)(66 72 69)(73 79 76)(74 80 77)(75 81 78)(82 88 85)(83 89 86)(84 90 87)(91 97 94)(92 98 95)(93 99 96)(100 106 103)(101 107 104)(102 108 105) (1 2 3 4 5 6 7 8 9)(10 11 12 13 14 15 16 17 18)(19 20 21 22 23 24 25 26 27)(28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45)(46 47 48 49 50 51 52 53 54)(55 56 57 58 59 60 61 62 63)(64 65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80 81)(82 83 84 85 86 87 88 89 90)(91 92 93 94 95 96 97 98 99)(100 101 102 103 104 105 106 107 108) (10 16 13)(11 17 14)(12 18 15)(19 22 25)(20 23 26)(21 24 27)(37 43 40)(38 44 41)(39 45 42)(46 49 52)(47 50 53)(48 51 54)(64 70 67)(65 71 68)(66 72 69)(73 76 79)(74 77 80)(75 78 81)(91 97 94)(92 98 95)(93 99 96)(100 103 106)(101 104 107)(102 105 108) (1 26 11 7 23 17 4 20 14)(2 21 12 8 27 18 5 24 15)(3 25 13 9 22 10 6 19 16)(28 46 43 34 52 40 31 49 37)(29 50 44 35 47 41 32 53 38)(30 54 45 36 51 42 33 48 39)(55 73 70 61 79 67 58 76 64)(56 77 71 62 74 68 59 80 65)(57 81 72 63 78 69 60 75 66)(82 100 97 88 106 94 85 103 91)(83 104 98 89 101 95 86 107 92)(84 108 99 90 105 96 87 102 93) G:=sub<Sym(108)| (1,59)(2,60)(3,61)(4,62)(5,63)(6,55)(7,56)(8,57)(9,58)(10,64)(11,65)(12,66)(13,67)(14,68)(15,69)(16,70)(17,71)(18,72)(19,73)(20,74)(21,75)(22,76)(23,77)(24,78)(25,79)(26,80)(27,81)(28,82)(29,83)(30,84)(31,85)(32,86)(33,87)(34,88)(35,89)(36,90)(37,91)(38,92)(39,93)(40,94)(41,95)(42,96)(43,97)(44,98)(45,99)(46,100)(47,101)(48,102)(49,103)(50,104)(51,105)(52,106)(53,107)(54,108), (1,32)(2,33)(3,34)(4,35)(5,36)(6,28)(7,29)(8,30)(9,31)(10,37)(11,38)(12,39)(13,40)(14,41)(15,42)(16,43)(17,44)(18,45)(19,46)(20,47)(21,48)(22,49)(23,50)(24,51)(25,52)(26,53)(27,54)(55,82)(56,83)(57,84)(58,85)(59,86)(60,87)(61,88)(62,89)(63,90)(64,91)(65,92)(66,93)(67,94)(68,95)(69,96)(70,97)(71,98)(72,99)(73,100)(74,101)(75,102)(76,103)(77,104)(78,105)(79,106)(80,107)(81,108), 
(1,7,4)(2,8,5)(3,9,6)(10,16,13)(11,17,14)(12,18,15)(19,25,22)(20,26,23)(21,27,24)(28,34,31)(29,35,32)(30,36,33)(37,43,40)(38,44,41)(39,45,42)(46,52,49)(47,53,50)(48,54,51)(55,61,58)(56,62,59)(57,63,60)(64,70,67)(65,71,68)(66,72,69)(73,79,76)(74,80,77)(75,81,78)(82,88,85)(83,89,86)(84,90,87)(91,97,94)(92,98,95)(93,99,96)(100,106,103)(101,107,104)(102,108,105), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63)(64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81)(82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99)(100,101,102,103,104,105,106,107,108), (10,16,13)(11,17,14)(12,18,15)(19,22,25)(20,23,26)(21,24,27)(37,43,40)(38,44,41)(39,45,42)(46,49,52)(47,50,53)(48,51,54)(64,70,67)(65,71,68)(66,72,69)(73,76,79)(74,77,80)(75,78,81)(91,97,94)(92,98,95)(93,99,96)(100,103,106)(101,104,107)(102,105,108), (1,26,11,7,23,17,4,20,14)(2,21,12,8,27,18,5,24,15)(3,25,13,9,22,10,6,19,16)(28,46,43,34,52,40,31,49,37)(29,50,44,35,47,41,32,53,38)(30,54,45,36,51,42,33,48,39)(55,73,70,61,79,67,58,76,64)(56,77,71,62,74,68,59,80,65)(57,81,72,63,78,69,60,75,66)(82,100,97,88,106,94,85,103,91)(83,104,98,89,101,95,86,107,92)(84,108,99,90,105,96,87,102,93)>; G:=Group( (1,59)(2,60)(3,61)(4,62)(5,63)(6,55)(7,56)(8,57)(9,58)(10,64)(11,65)(12,66)(13,67)(14,68)(15,69)(16,70)(17,71)(18,72)(19,73)(20,74)(21,75)(22,76)(23,77)(24,78)(25,79)(26,80)(27,81)(28,82)(29,83)(30,84)(31,85)(32,86)(33,87)(34,88)(35,89)(36,90)(37,91)(38,92)(39,93)(40,94)(41,95)(42,96)(43,97)(44,98)(45,99)(46,100)(47,101)(48,102)(49,103)(50,104)(51,105)(52,106)(53,107)(54,108), 
(1,32)(2,33)(3,34)(4,35)(5,36)(6,28)(7,29)(8,30)(9,31)(10,37)(11,38)(12,39)(13,40)(14,41)(15,42)(16,43)(17,44)(18,45)(19,46)(20,47)(21,48)(22,49)(23,50)(24,51)(25,52)(26,53)(27,54)(55,82)(56,83)(57,84)(58,85)(59,86)(60,87)(61,88)(62,89)(63,90)(64,91)(65,92)(66,93)(67,94)(68,95)(69,96)(70,97)(71,98)(72,99)(73,100)(74,101)(75,102)(76,103)(77,104)(78,105)(79,106)(80,107)(81,108), (1,7,4)(2,8,5)(3,9,6)(10,16,13)(11,17,14)(12,18,15)(19,25,22)(20,26,23)(21,27,24)(28,34,31)(29,35,32)(30,36,33)(37,43,40)(38,44,41)(39,45,42)(46,52,49)(47,53,50)(48,54,51)(55,61,58)(56,62,59)(57,63,60)(64,70,67)(65,71,68)(66,72,69)(73,79,76)(74,80,77)(75,81,78)(82,88,85)(83,89,86)(84,90,87)(91,97,94)(92,98,95)(93,99,96)(100,106,103)(101,107,104)(102,108,105), (1,2,3,4,5,6,7,8,9)(10,11,12,13,14,15,16,17,18)(19,20,21,22,23,24,25,26,27)(28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45)(46,47,48,49,50,51,52,53,54)(55,56,57,58,59,60,61,62,63)(64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81)(82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99)(100,101,102,103,104,105,106,107,108), (10,16,13)(11,17,14)(12,18,15)(19,22,25)(20,23,26)(21,24,27)(37,43,40)(38,44,41)(39,45,42)(46,49,52)(47,50,53)(48,51,54)(64,70,67)(65,71,68)(66,72,69)(73,76,79)(74,77,80)(75,78,81)(91,97,94)(92,98,95)(93,99,96)(100,103,106)(101,104,107)(102,105,108), (1,26,11,7,23,17,4,20,14)(2,21,12,8,27,18,5,24,15)(3,25,13,9,22,10,6,19,16)(28,46,43,34,52,40,31,49,37)(29,50,44,35,47,41,32,53,38)(30,54,45,36,51,42,33,48,39)(55,73,70,61,79,67,58,76,64)(56,77,71,62,74,68,59,80,65)(57,81,72,63,78,69,60,75,66)(82,100,97,88,106,94,85,103,91)(83,104,98,89,101,95,86,107,92)(84,108,99,90,105,96,87,102,93) ); 
G=PermutationGroup([[(1,59),(2,60),(3,61),(4,62),(5,63),(6,55),(7,56),(8,57),(9,58),(10,64),(11,65),(12,66),(13,67),(14,68),(15,69),(16,70),(17,71),(18,72),(19,73),(20,74),(21,75),(22,76),(23,77),(24,78),(25,79),(26,80),(27,81),(28,82),(29,83),(30,84),(31,85),(32,86),(33,87),(34,88),(35,89),(36,90),(37,91),(38,92),(39,93),(40,94),(41,95),(42,96),(43,97),(44,98),(45,99),(46,100),(47,101),(48,102),(49,103),(50,104),(51,105),(52,106),(53,107),(54,108)], [(1,32),(2,33),(3,34),(4,35),(5,36),(6,28),(7,29),(8,30),(9,31),(10,37),(11,38),(12,39),(13,40),(14,41),(15,42),(16,43),(17,44),(18,45),(19,46),(20,47),(21,48),(22,49),(23,50),(24,51),(25,52),(26,53),(27,54),(55,82),(56,83),(57,84),(58,85),(59,86),(60,87),(61,88),(62,89),(63,90),(64,91),(65,92),(66,93),(67,94),(68,95),(69,96),(70,97),(71,98),(72,99),(73,100),(74,101),(75,102),(76,103),(77,104),(78,105),(79,106),(80,107),(81,108)], [(1,7,4),(2,8,5),(3,9,6),(10,16,13),(11,17,14),(12,18,15),(19,25,22),(20,26,23),(21,27,24),(28,34,31),(29,35,32),(30,36,33),(37,43,40),(38,44,41),(39,45,42),(46,52,49),(47,53,50),(48,54,51),(55,61,58),(56,62,59),(57,63,60),(64,70,67),(65,71,68),(66,72,69),(73,79,76),(74,80,77),(75,81,78),(82,88,85),(83,89,86),(84,90,87),(91,97,94),(92,98,95),(93,99,96),(100,106,103),(101,107,104),(102,108,105)], [(1,2,3,4,5,6,7,8,9),(10,11,12,13,14,15,16,17,18),(19,20,21,22,23,24,25,26,27),(28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45),(46,47,48,49,50,51,52,53,54),(55,56,57,58,59,60,61,62,63),(64,65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80,81),(82,83,84,85,86,87,88,89,90),(91,92,93,94,95,96,97,98,99),(100,101,102,103,104,105,106,107,108)], [(10,16,13),(11,17,14),(12,18,15),(19,22,25),(20,23,26),(21,24,27),(37,43,40),(38,44,41),(39,45,42),(46,49,52),(47,50,53),(48,51,54),(64,70,67),(65,71,68),(66,72,69),(73,76,79),(74,77,80),(75,78,81),(91,97,94),(92,98,95),(93,99,96),(100,103,106),(101,104,107),(102,105,108)], 
[(1,26,11,7,23,17,4,20,14),(2,21,12,8,27,18,5,24,15),(3,25,13,9,22,10,6,19,16),(28,46,43,34,52,40,31,49,37),(29,50,44,35,47,41,32,53,38),(30,54,45,36,51,42,33,48,39),(55,73,70,61,79,67,58,76,64),(56,77,71,62,74,68,59,80,65),(57,81,72,63,78,69,60,75,66),(82,100,97,88,106,94,85,103,91),(83,104,98,89,101,95,86,107,92),(84,108,99,90,105,96,87,102,93)]])

68 conjugacy classes

class |  1  2A  2B  2C  3A  3B  3C  3D  6A ··· 6F  6G ··· 6L  9A ··· 9F  9G ··· 9L  18A ··· 18R  18S ··· 18AJ
order |  1   2   2   2   3   3   3   3   6 ···  6   6 ···  6   9 ···  9   9 ···  9   18 ···  18   18 ···  18
size  |  1   1   1   1   1   1   3   3   1 ···  1   3 ···  3   3 ···  3   9 ···  9    3 ···   3    9 ···   9

68 irreducible representations

dim    |  1   1   1   1   1   1   3    3       3       3
type   |  +   +
image  |  C1  C2  C3  C3  C6  C6  He3  C2×He3  C3.He3  C2×C3.He3
kernel |  C2²×C3.He3  C2×C3.He3  C6×C18  C2²×3-1+2  C3×C18  C2×3-1+2  C2×C6  C6  C2²  C2
# reps |  1   3   2   6   6   18  2    6       6       18

Matrix representation of C2²×C3.He3 in GL4(𝔽19) generated by

[ 1  0  0  0]   [18  0  0  0]   [ 1  0  0  0]   [ 7  0  0  0]   [ 1  0  0  0]   [ 7  0  0  0]
[ 0 18  0  0]   [ 0 18  0  0]   [ 0 11  0  0]   [ 0  9  0  0]   [ 0  1  0  0]   [ 0  0  1  0]
[ 0  0 18  0]   [ 0  0 18  0]   [ 0  0 11  0]   [ 0  0  9  0]   [ 0  0 11  0]   [ 0 12  8 10]
[ 0  0  0 18]   [ 0  0  0 18]   [ 0  0  0 11]   [ 0 13 15  4]   [ 0 11 12  7]   [ 0  1  0 11]

G:=sub<GL(4,GF(19))| [1,0,0,0,0,18,0,0,0,0,18,0,0,0,0,18],[18,0,0,0,0,18,0,0,0,0,18,0,0,0,0,18],[1,0,0,0,0,11,0,0,0,0,11,0,0,0,0,11],[7,0,0,0,0,9,0,13,0,0,9,15,0,0,0,4],[1,0,0,0,0,1,0,11,0,0,11,12,0,0,0,7],[7,0,0,0,0,0,12,1,0,1,8,0,0,0,10,11] >;

C2²×C3.He3 in GAP, Magma, Sage, TeX

C_2^2\times C_3.{\rm He}_3 % in TeX

G:=Group("C2^2xC3.He3"); // GroupNames label

G:=SmallGroup(324,89); // by ID

G=gap.SmallGroup(324,89); # by ID

G:=PCGroup([6,-2,-2,-3,-3,-3,-3,500,303,453,1096]); // Polycyclic

G:=Group<a,b,c,d,e,f|a^2=b^2=c^3=e^3=1,d^3=c^-1,f^3=c,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,a*f=f*a,b*c=c*b,b*d=d*b,b*e=e*b,b*f=f*b,c*d=d*c,c*e=e*c,c*f=f*c,d*e=e*d,f*d*f^-1=c*d*e^-1,f*e*f^-1=c^-1*e>; // generators/relations
Volcano — Algebra Level 2

Does the equation $$x^{n} + x^{n-1} + \cdots + x = 1$$ have exactly one real positive root for each natural number $$n \ge 1$$?
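(Not part of the problem statement — just a numeric sanity check.) Since $f(x) = x + x^2 + \cdots + x^n$ is strictly increasing for $x > 0$ with $f(0) = 0$ and $f(1) = n \ge 1$, bisection on $(0, 1]$ finds the unique positive root; the helper names below are my own:

```python
# f(x) = x + x^2 + ... + x^n is strictly increasing for x > 0,
# so f(x) = 1 can have at most one positive root.
def f(x, n):
    return sum(x**k for k in range(1, n + 1))

def positive_root(n, tol=1e-12):
    # f(0) = 0 < 1 and f(1) = n >= 1, so the root lies in (0, 1]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid, n) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for n in (1, 2, 5, 20):
    print(n, positive_root(n))  # n = 2 gives the golden-ratio conjugate ~ 0.618
```

As $n \to \infty$ the root approaches $1/2$, since $\sum_{k\ge 1} x^k = x/(1-x) = 1$ at $x = 1/2$.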
# Break-even Point: Equation Method

Break-even is the point of zero loss or profit. At the break-even point, the revenues of the business equal its total costs, and its contribution margin equals its total fixed costs. The break-even point can be calculated by the equation method, the contribution margin method, or the graphical method. The equation method is based on the cost-volume-profit (CVP) formula:

px = vx + FC + Profit

Where p is the price per unit, x is the number of units, v is the variable cost per unit, and FC is the total fixed cost.

## Calculation

### BEP in Sales Units

At the break-even point the profit is zero, so the CVP formula simplifies to:

px = vx + FC

Solving the above equation for x, which equals the break-even point in sales units, we get:

Break-even Sales Units = x = FC ÷ (p − v)

### BEP in Sales Dollars

The break-even point in sales dollars is calculated using the following formula:

Break-even Sales Dollars = Price per Unit × Break-even Sales Units

## Example

Calculate the break-even point in sales units and sales dollars from the following information:

Price per Unit: $15
Variable Cost per Unit: $7
Total Fixed Cost: $9,000

Solution

We have p = $15, v = $7, and FC = $9,000.

Substituting the known values into the formula for break-even point in sales units, we get:

Break-even Point in Sales Units (x) = 9,000 ÷ (15 − 7) = 9,000 ÷ 8 = 1,125 units

Break-even Point in Sales Dollars = $15 × 1,125 = $16,875

Written by Irfanullah Jan
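The example can be checked in a few lines (a minimal sketch; the function names are illustrative, not from the article):

```python
# Break-even point from the CVP relation px = vx + FC.
# At break-even, profit = 0, so x = FC / (p - v).
def break_even_units(price, variable_cost, fixed_cost):
    return fixed_cost / (price - variable_cost)

def break_even_dollars(price, variable_cost, fixed_cost):
    return price * break_even_units(price, variable_cost, fixed_cost)

# Values from the example: p = $15, v = $7, FC = $9,000
units = break_even_units(15, 7, 9000)
dollars = break_even_dollars(15, 7, 9000)
print(units, dollars)  # 1125.0 16875.0
```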
# Landscape of supersymmetry breaking vacua in geometrically realized gauge theories

@article{Ooguri2006LandscapeOS,
  title={Landscape of supersymmetry breaking vacua in geometrically realized gauge theories},
  author={Hirosi Ooguri and Yutaka Ookouchi},
  journal={Nuclear Physics},
  year={2006},
  volume={755},
  pages={239-253}
}

• Published 8 June 2006 • Physics • Nuclear Physics

We study a new mechanism to dynamically break supersymmetry in the E8 × E8 heterotic string. As discussed recently in the literature, a long-lived, meta-stable non-supersymmetric vacuum can be

• Physics • 2013

It has been shown that four dimensional N = 2 gauge theories, softly broken to N = 1 by a superpotential term, can accommodate metastable non-supersymmetric vacua in their moduli space. We study

Starting from an $\mathcal{N}=1$ supersymmetric electric gauge theory with the gauge group Sp(Nc) × SO(2N′c) with fundamentals for the first gauge group factor and a bifundamental, we apply Seiberg

Starting from a supersymmetric electric gauge theory with the gauge group with fundamentals for each gauge group, the bifundamentals, a symmetric flavor and a conjugate symmetric flavor for SU(Nc),

Starting from an ${\mathcal N}=1$ supersymmetric electric gauge theory with the multiple product gauge group and the bifundamentals, we apply Seiberg dual to each gauge group, obtain the ${\mathcal

• Mathematics • 2008

We geometrically engineer $\mathcal{N} = 2$ theories perturbed by a superpotential by adding 3-form flux with support at infinity to local Calabi-Yau geometries in type IIB.
This allows us to apply

• Physics • 2007

We revisit the issue of moduli stabilization in a class of $\mathcal{N} = 1$ four dimensional supergravity theories which are low energy descriptions of standard perturbative heterotic string vacua compactified on

## References (showing 1–10 of 25)

• Physics • 2006

We study the existence of long-lived meta-stable supersymmetry breaking vacua in gauge theories with massless quarks, upon the addition of extra massive flavors. A simple realization is provided by a

• Physics • 1997

It has been known that the fivebrane of type IIA theory can be used to give an exact low energy description of N=2 supersymmetric gauge theories in four dimensions. We follow the recent M theory

• Mathematics • 2003

We outline the construction of metastable de Sitter vacua of type IIB string theory. Our starting point is highly warped IIB compactifications with nontrivial NS and RR three-form fluxes. By

• Mathematics Physical review. D, Particles and fields • 1996

The main contribution of the present work is that it gives a careful and self-contained treatment of limit points and singularities, proving that in the absence of a superpotential the classical moduli space is the algebraic variety described by the set of all holomorphic gauge-invariant polynomials.

• Mathematics • 2001

We construct N=1 supersymmetric theories on worldvolumes of D5 branes wrapped around 2-cycles of threefolds which are A-D-E fibrations over a plane. We propose large N duals as geometric transitions
# [texhax] white space problem Tom Schneider toms at ncifcrf.gov Wed Jul 15 10:27:14 CEST 2009 Donald: > And that's fine too! Hmmm, I see now it is an author-year system (I > just looked it up) whereas cite.sty is mainly intended for numeric > citations. That isn't specifically a *problem* but you should probably > select [nosort,nocompress] options (\RequirePackageWithOptions), though > using another package as the basis might be better. \RequirePackageWithOptions[nosort,nocompress]{cite} crashed: ! LaTeX Error: Missing \begin{document}. See the LaTeX manual or LaTeX Companion for explanation. Type H <return> for immediate help. ... l.15 \RequirePackageWithOptions[n osort,nocompress]{cite} \RequirePackage{cite} works. > It is clear > that \citepunct should be \renewcommand\citepunct{;\ } to allow > line breaks in the best place. Ok. > I had assumed cell.sty was a more complete template for a journal layout, > rather than just the cite format. Even the bibliography definition is > commented out! I guess it was taken over by the change to \@biblabel > (great!) Yes, I simplified it. > I looked at the author instructions for Cell, and they don't really say > much about the citation format, just a bit about the bibliography > (references) format (need 10 authors to use et al!). Lots about copies > of consent forms! Ugh, got to do that tonight ... > Oh! Back to biblabel... what you have now will indent the bibliography, > so it is probably better to use: > > \def\@biblabel#1{\hspace{-\labelsep}} Worked like a charm!!! I was wondering about that ... New version is up: http://www.ccrnp.ncifcrf.gov/~toms/ftp/cell.sty Thanks for the help!! Tom Dr. Thomas D. Schneider National Institutes of Health schneidt at mail.nih.gov toms at alum.mit.edu (permanent) http://alum.mit.edu/www/toms
# Changes between Version 38 and Version 39 of DatabaseBasedAnalysis/Tables

Timestamp: Nov 7, 2018, 2:59:01 PM (7 months ago) Comment: --

### Legend: Unmodified

v38

=== Threshold setting ===

To select data by threshold, the ideal value is usually fThresholdMinSet. It reflects the value obtained as the minimum threshold during a run from the currents at its beginning. While some trigger patches could have a significantly higher threshold during a run (which affects the mean value over all patches), the MinSet value usually reflects the majority of the patch thresholds and should be similar (or in most cases identical) to the average (over all events) median (over the camera).

To select data by threshold, the ideal value is usually fThresholdMinSet. It reflects the value obtained as the minimum threshold during a run from the currents at its beginning. While some trigger patches could have a significantly higher threshold during a run (which affects the mean value over all patches), the !MinSet value usually reflects the majority of the patch thresholds and should be similar (or in most cases identical) to the average (over all events) median (over the camera).
### Moderately Hard Functions: Definition, Instantiations, and Applications Joël Alwen and Björn Tackmann ##### Abstract Several cryptographic schemes and applications are based on functions that are both reasonably efficient to compute and moderately hard to invert, including client puzzles for Denial-of-Service protection, password protection via salted hashes, or recent proof-of-work blockchain systems. Despite their wide use, a definition of this concept has not yet been distilled and formalized explicitly. Instead, either the applications are proven directly based on the assumptions underlying the function, or some property of the function is proven, but the security of the application is argued only informally. The goal of this work is to provide a (universal) definition that decouples the efforts of designing new moderately hard functions and of building protocols based on them, serving as an interface between the two. On a technical level, beyond the mentioned definitions, we instantiate the model for four different notions of hardness. We extend the work of Alwen and Serbinenko (STOC 2015) by providing a general tool for proving security for the first notion of memory-hard functions that allows for provably secure applications. The tool allows us to recover all of the graph-theoretic techniques developed for proving security under the older, non-composable, notion of security used by Alwen and Serbinenko. As an application of our definition of moderately hard functions, we prove the security of two different schemes for proofs of effort (PoE). We also formalize and instantiate the concept of a non-interactive proof of effort (niPoE), in which the proof is not bound to a particular communication context but rather any bit-string chosen by the prover. 
Available format(s)
Category: Foundations
Publication info: A minor revision of an IACR publication in TCC 2017
Keywords: moderately hard functions, indifferentiability, proof of effort
Contact author(s): bta @ zurich ibm com
History
Short URL: https://ia.cr/2017/945
License: CC BY

BibTeX:

@misc{cryptoeprint:2017/945,
  author = {Joël Alwen and Björn Tackmann},
  title = {Moderately Hard Functions: Definition, Instantiations, and Applications},
  howpublished = {Cryptology ePrint Archive, Paper 2017/945},
  year = {2017},
  note = {\url{https://eprint.iacr.org/2017/945}},
  url = {https://eprint.iacr.org/2017/945}
}
An Ode To The Rice Cooker, The Smartest Kitchen Appliance I’ve Ever Owned

One of the first things I bought when I moved to college was a rice cooker. It was simple, the kind that costs 20 bucks at any old appliance or homeware store and has exactly two settings: on and off. Sure, it was hard to clean the gummy, burned rice off the bottom of the bowl in my dorm room’s thimble of a bathroom sink. And there was no eating the gelatinous, unevenly cooked mess that the machine tried to pass off as brown rice. But it served me well through two years of dorm living. A few years out of college, a cousin decided that it was time for me to officially cross over into adulthood. She graduated me to a new kind of rice cooker, one that to this day is the smartest thing in my kitchen. It serenades me when I turn it on with A through P of the alphabet song[1] and croons an air called “Amaryllis,” by Louis XIII, when it’s done. But it’s the math this one runs on, not the adorable music, that makes it so special. The rice cooker of my adulthood is built on fuzzy logic, a field of computing that tries to make rational decisions in a world of imprecision. By mimicking our gray matter’s ability to reconcile gray information, this frivolous gadget has become one of the most essential items in my kitchen. Fuzzy logic is very different from the walls of 1s and 0s that are the foundation for so many computers and electronics. Those 1s and 0s are rooted in Boolean binary — an expression of Boolean logic, where every value or action is reduced to an answer of true or false. Fuzzy logic was an attempt to formalize a radically different approach, one that more closely resembles the human mind’s ability to find reason and rationality among incomplete information. Many important questions can’t be narrowed down to a yes or no; lots of things don’t fall into discrete categories. At the stroke of midnight on someone’s 18th birthday, is she an adult or a child?
Is that periwinkle shirt blue? Where do you put a spork in your kitchen drawer — the spoon slot or the fork slot? Fuzzy logic was first proposed in 1965 by Lotfi Zadeh, a computer scientist who is now retired from the University of California, Berkeley. It was controversial for decades. As Zadeh wrote in the foreword to a 2013 edition of an academic journal dedicated to fuzzy logic, his use of words instead of numbers, as well as the attempt to incorporate imprecision, was heresy to many of his colleagues. “Almost all real-life applications of fuzzy logic involve the use of linguistic variables,” Zadeh wrote, adding, “In science, there is a deep-seated tradition of according much more respect for numbers than for words. In fact, scientific progress is commonly equated to progression from the use of words to the use of numbers.” When Zadeh attended conferences, one of his Berkeley colleagues, William Kahan, often shadowed him to give public rebuttals, recalled Heidar Malki, a professor of electrical and computer engineering at the University of Houston who specializes in fuzzy logic. One encounter between the men was chronicled in a 2002 journal series: “Fuzzy theory is wrong — wrong and pernicious,” Kahan said. “The danger of fuzzy theory is that it will encourage the sort of imprecise thinking that brought us to so much trouble.” Such thinking, he and other mathematicians lamented, didn’t require the rigor demanded by probability theory, the kind of logic we most often use here at FiveThirtyEight. That approach was seen as the only path to true knowledge. Indeed, the very word fuzzy often has a negative connotation in the U.S. (see fuzzy math in politics), and goes against Western notions of logic, which are mostly built around the Aristotelian law of the excluded middle: in lay terms, the idea that a statement cannot be true and false at the same time. 
Malki, however, says that fuzzy logic’s ability to incorporate gray into what was once a black and white world is what makes it so powerful. These gray areas, along with the use of language rather than just numbers, also explain why it is a foundation of artificial intelligence. “This is exactly how we as human beings think and make decisions,” Malki said. “If you ask someone how the temperature is, we don’t say 82.3 degrees. We say it’s warm.” So what does all of this have to do with consistently perfect rice? The Aristotle-inspired rice cooker I had in college would heat until the temperature of the rice rose above 212 degrees Fahrenheit, at which point all of the water would have been absorbed. As the temperature rose past this point, a magnet was activated by a thermostat and the machine would shut off. The appliance was either on or off, and it did but one thing while it was on. In my current fuzzy-logic cooker, however, I tell the machine what kind of rice I’m using and how long it has been soaking. It takes that information and decides what temperature it should reach, and for how long. Generally using what are essentially if/then statements, it can fine-tune the process. For example, it can take into account the surrounding air temperature and turn the heating element up or down to compensate. The rice isn’t cooked or uncooked; the fuzzy-logic machine wants it to be cooked correctly. Take brown rice, which is the same as white rice except it hasn’t had all of its bran layer and germ removed. In a magnetic cooker, you deal with the hard exterior by adding more water, which breaks down the outside but often leaves the rice mushy. In a fuzzy-logic cooker, the brown rice kernels are cooked at a lower temperature for a much longer period, which allows the rice to cook through without turning into a pulpy paste. 
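Here’s a toy example of how such rules can be wired up in code — my own sketch, with made-up membership ranges, not anything resembling a real rice cooker’s firmware. Triangular membership functions grade how “cool,” “warm” or “hot” a temperature is, and the rule outputs are blended by a weighted average (centroid defuzzification):

```python
# Toy fuzzy-logic controller (illustrative only; the ranges and rule outputs
# are invented for this sketch, not taken from any appliance).
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_level(temp_f):
    # Fuzzify: a crisp temperature belongs partly to several linguistic sets.
    cool = tri(temp_f, 40, 60, 80)
    warm = tri(temp_f, 70, 85, 100)
    hot  = tri(temp_f, 90, 110, 130)
    # Rules: if cool -> high power (1.0); if warm -> medium (0.5); if hot -> low (0.1)
    weights = [cool, warm, hot]
    outputs = [1.0, 0.5, 0.1]
    total = sum(weights)
    if total == 0:
        return 0.0
    # Defuzzify: weighted average of the rule outputs.
    return sum(w * o for w, o in zip(weights, outputs)) / total

print(heater_level(75))  # partly "cool", partly "warm" -> a blended power level
```

Notice there is no hard switch at any temperature: as the input slides from “cool” toward “warm,” the heater level glides between the two rules’ outputs.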
I asked Marilyn Matsuba, a marketing manager for Zojirushi, a Japanese company known for its technologically advanced household products, why rice cookers are so much more intelligent than other kitchen devices. She pointed out that they are the natural child of two of Japan’s obsessions: rice and robots. Fuzzy logic is a subset of the artificial intelligence used in robots, and rice is so important in the diet that manufacturers are constantly looking for better ways to cook it. Compare that with some other popular kitchen gadgets. A handheld frother feels like a magic wand the way it whips billowy foam out of a cup of milk, but it’s really just a fast-moving whisk. A food processor can have attachments that do everything from kneading dough to shredding cheese, but the cook tells it when to turn on and off, and how fine to julienne the carrots. By contrast, innumerable gadgets and electronics outside the kitchen use fuzzy logic, among them the Sendai subway system in Japan, some fancy facial pattern recognition software, antilock brakes and air conditioners. “The funny thing is, the more research they do, the more they realize that the way they used to cook it with fire was the best way,” Matsuba told me. She says the more advanced technology is in many ways just better at mimicking a very old cooking technique that involves a vessel called a mushikamado, which looks something like an igloo with the top cut off. A rice pot was suspended inside, and then a lid was placed on the vessel, with a stone on top. Below the pot was fire. The cook would use high heat at first, then lower the temperature. Judging by the amount of steam, he would know when the rice was ready. There’s something wondrous about this seemingly simple device tapping into generations of cooking knowledge, using humanlike judgment skills to turn out an ever more perfect iteration of one of humanity’s staple foods. 
Fuzzy-logic rice cookers are a luxury (online they range in price from $50 to more than $700), but the awe mine inspires nearly matches the quality of the rice. Now if only someone would figure out how to get pasta perfectly al dente every time.

## Footnotes

1. Though there is a natural break in the song after P, the abrupt ending leaves me hanging. Perhaps the tune is better described as the first two lines of “Twinkle, Twinkle, Little Star,” which would appropriately leave us suspended in a state of wonder.

Anna Maria Barry-Jester reports on public health, food and culture for FiveThirtyEight.
# A, B decidable: proof that A\B is decidable too

For an assignment I have to prove that for two given decidable languages A and B, A\B is decidable too. My idea is as follows: If B is empty or doesn't have elements in common with A, then A\B is decidable because A\B = A. If B is not empty and intersects with A, A\B is just a smaller subset of A and therefore decidable. I have a feeling that this is either the wrong idea or not formalized enough. I would really appreciate any hints regarding that. EDIT: A subset of a decidable language is NOT always decidable too, this was a misconception.

• Why do you think a subset of a decidable language is still decidable? Nov 10 '18 at 11:01
• This is obviously a misconception, thanks for pointing that out. That leaves me with no useful approach to this assignment though. Can you point me in the right direction? Nov 10 '18 at 11:07
• You can try to make use of the fact that $B$ is decidable. Nov 10 '18 at 11:26
• I can't think of any way to utilize that. Can you elaborate? Nov 10 '18 at 11:33

Let $$M_A$$ and $$M_B$$ be deciders for $$A$$ and $$B$$ respectively. You can construct a TM $$M$$,

$$M$$ = "On input x,
$$\qquad$$ 1. Simulate $$M_A$$ on x.
$$\qquad$$ 2. Reject if $$M_A$$ rejects. Else simulate $$M_B$$ on x.
$$\qquad$$ 3. Accept if $$M_B$$ rejects. Else reject."

Correctness: $$M$$ always halts, as $$M_A$$ and $$M_B$$ always halt. $$M$$ accepts iff $$M_A$$ accepts and $$M_B$$ rejects, i.e., $$x \in A$$ and $$x \notin B$$. $$M$$ rejects otherwise.
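For intuition, the same construction can be mirrored with ordinary functions standing in for the deciders $M_A$ and $M_B$ (the example languages below are mine, purely illustrative):

```python
# Sketch of the decider for A \ B built from total deciders for A and B
# (Python predicates stand in for the Turing machines M_A and M_B).
def decider_difference(decides_A, decides_B):
    """Given total deciders for A and B, return a total decider for A \\ B."""
    def M(x):
        if not decides_A(x):      # steps 1-2: reject if M_A rejects
            return False
        return not decides_B(x)   # step 3: accept iff M_B rejects
    return M

# Example: A = even numbers, B = multiples of 3 (both trivially decidable).
in_A = lambda n: n % 2 == 0
in_B = lambda n: n % 3 == 0
in_diff = decider_difference(in_A, in_B)
print([n for n in range(13) if in_diff(n)])  # evens that are not multiples of 3
```

Since both input deciders halt on every input, the composed function also halts on every input — that is the whole argument.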
Dealing with astronomy data means I often need to deal with information collecting. Most of the time the API from the websites is powerful enough, and I don't need to make any extra effort to clean the data. Once in a while, I am not happy with learning all the grammar of the API, so I decide to act like a barbarian and do it the hard way: scraping the web page and playing with regular expressions (regex for short). Alright, after a brain-racking day with all the discussions, let's have some different kind of fun – ice-cold beer on, let's rock. In astronomy, most of the websites are friendly enough toward casual scrapers. Besides, since the data is mostly well structured/formatted already, the effort for cleaning the data is somewhere from non-existent to minimal. Plus, scraping is something that, once you are used to it, is hard to get over. Let me show one quick example of web page scraping. The task is the following: I need to find out the discovery dates of those 294 supernova remnants (SNRs) from this article. The task can be decomposed into a few steps:

1. check if it's documented in this paper (duh..)
2. if not – indeed – find an online database that contains all the information
3. check the database and find the discovery date
4. automate

I'm sure you can guess 1) is negative. Fortunately, the data is all stored in Simbad at this page. In principle, we can just export the data and call it a day. That said, I am not an expert in Simbad query grammar – even though it seems very powerful – nor am I in the mood for learning pages after pages of instructions. Plus, the discovery year is not explicitly documented in this table either. As a firm believer in minimal effort – let's strategize. Immediately, it's clear that the year of each supernova can be obtained by clicking on the column "#ref" to show the papers that cited this SNR. Taking SNR G310.6-00.3 as an example, it has 19 citations, and clicking on the number leads to this page.
Without any doubt, the first entry is the discovery paper. All we need to do is follow the link for each SNR and find the earliest discovery paper. No sweat! The tool we are going to use is called scrapy. It's a versatile scraper engine. All you need to do is specify what website you want it to scrape and what links you want it to follow. Plus, it speaks python, so seriously, how can one not love it? You can find a fantastic tutorial here. Given that the use-case is slightly different, I'm going to include step-by-step instructions here anyway. First you create a crawler engine 'greencat' by

scrapy startproject greencat

Then you write your spider script at 'greencat/spiders/', and I'm going to call it discovery_year.py. The debugging part takes up the main effort. This can be done with

scrapy shell "http://simbad.u-strasbg.fr/simbad/sim-ref?bibcode=2019JApA...40...36G&simbo=on"

where the first line is for the starting page, and the second one is for the page of a given SNR. Another useful tool is the developer's tools in the Chrome browser, which let you highlight css blocks. Once you spot the right block that contains your data, you can try in the scrapy shell something like

response.css('div table tbody tr')

which uses a css selector – instead of XPath, the native selector used by scrapy – to select the block div that contains table that contains tbody (table body) that contains tr (table row). For more on selectors see here. Afterward, we read the data row by row and choose the right column that contains the citation number.
In the end, the spider looks like the following (the start URLs and the link-following line dropped out of the original listing; the reconstructed parts are flagged in the comments):

import scrapy
import re

class DiscYearSpider(scrapy.Spider):
    name = "disc_year"

    def start_requests(self):
        urls = [
            # start URLs omitted in the original listing
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        table = response.css("div")[6]  # get the table
        rows = table.css('table tbody tr')
        for row in rows:
            target_block = row.css('td')[-2]  # the one that contains the number of citations
            number_of_citations = target_block.css('a::text').get()  # get the number
            # reconstructed: grab the link behind the citation count and follow it
            links = target_block.css('a::attr(href)').get()
            if links is not None:
                yield response.follow(links, callback=self.parse_cite_page)

    def parse_cite_page(self, response):
        m = re.search('Name=(.*)', response.url)
        name = m.group(1)
        rows = response.css('div table tbody tr')
        year_min = 9999
        for row in rows:
            target = row.css('td')[0]
            doi = target.css('a::text').get()
            year = int(doi[0:4])  # the bibcode starts with the publication year
            if year < year_min:
                year_min = year
        print('%s was discovered in %d' % (name, year_min))
        yield {name: year_min}

Run it with:

scrapy crawl disc_year -O discovery_year.json

This generates discovery_year.json that looks like

[
{"NAME%20Kes%2020A": 1972}, {"SNR%20G310.6-01.6": 2010}, {"SNR%20G310.6-00.3": 1972}, {"SNR%20G309.8%2b00.0": 1974}, {"SNR%20G311.5-00.3": 1974}, {"SNR%20G309.2-00.6": 1960}, {"SNR%20G308.3-01.4": 1996}, {"SNR%20G308.1-00.7": 1996}, {"SNR%20G306.3-00.9": 2011}, {"SNR%20G308.8-00.1": 1992}, {"SNR%20G301.4-01.0": 1996}, {"SNR%20G299.6-00.5": 1996}, {"SNR%20G302.3%2b00.7": 1974}, {"SNR%20G312.4-00.4": 1977}, {"SNR%20G113.0%2b00.2": 2005}, {"SNR%20G304.6%2b00.1": 1968}, {"SNR%20G116.5%2b01.1": 1979}, ...

The beer is gone, and I should call it a day as well. As a bonus, this is the detection rate of SNRs over the last few decades: and the total number of SNRs as a function of time: (Cover image credit: NASA/CXC/MIT/UMass Amherst/M.D.Stage et al.)
Main

There is currently an increased focus on the functional diversification of existing semiconductor process nodes to create new technologies1,2. This approach increases performance, supply chain resilience and security by reducing the number of chips from different process technologies that must be integrated and packaged into the final product. For technologies such as the fifth generation of wireless communications technology, such innovation is necessary to enable high-performance adaptable radios, where high-frequency resonators are a key element of both the filters and the voltage-controlled oscillators used in such systems. Recent developments in this area include thin-film bulk acoustic resonators with ferroelectric materials, as well as high-frequency Lamb- and Lamé-wave resonators using piezoelectric lithium niobate or aluminium nitride thin films3,4,5,6. To address band congestion, multiple-input–multiple-output arrays are becoming increasingly common as a way to multiplex transmissions between users in the same band, with large arrays allowing for increased performance across end users sharing the spectrum7. The beamforming enabled by such an antenna array is of particular use for higher-frequency bands, where propagation conditions are less stable due to increased radio-frequency (RF) absorption. These RF front ends require many more analogue components to be integrated in a compact form factor, driving research in RF integration into traditional digital processes. Many high-performance technologies require integration using microelectromechanical systems (MEMS)-first or MEMS-last process schemes8,9,10, which provide space and power savings compared with off-chip RF integration. The use of phononic-crystal (PnC) confinement, however, could allow resonator integration directly into a standard complementary metal–oxide–semiconductor (CMOS) process, potentially leading to larger multiple-input–multiple-output arrays in a smaller footprint at lower power11,12.
In addition to reducing parasitics from routing high-frequency signals over long distances, this also provides the opportunity to use high-performance transistors for active transduction, leading to potentially enhanced performance. Such an approach also provides a route to future acoustic coupling between devices co-located on the same chip, which reduces the need for power-hungry phase-locked loops and allows scaled coupled oscillator systems for neuromorphic computing to be explored13,14,15,16. In this Article, we report CMOS fin field-effect transistor (FinFET)-based resonators that utilize acoustic waveguiding confinement operating in the Institute of Electrical and Electronics Engineers (IEEE) X-band of 8–12 GHz. Resonator cavity design The designed resonator structure, referred to as a fin resonant body transistor (fRBT) (Fig. 1), is fully integrated in the GlobalFoundries 14LPP commercial process and occupies a space of 9 μm × 15 μm (ref. 17). The device is a four-port network, differentially driven between adjacent gates along the fin length with the output of the resonator taken as the differential transconductance (equation (1)), which is a function of three main items: the electrical-to-acoustic transduction efficiency in the drive transistors, acoustic energy confinement and amplification in the resonance cavity, and acoustic-to-drain current conversion in the output transistors. The first of these items, namely, drive transduction, is electrostatic due to the 14 nm fabrication process used, although the potential exists for future piezoelectric drive using CMOS-compatible thin-film ferroelectrics4,18,19. The third item, namely, acoustic-to-drain current conversion, is also largely a function of the process and materials used in device fabrication. 
Although this conversion will be examined utilizing a simplified model, the initial device design is focused on maximizing the acoustic energy confinement by varying the gate length, termination scheme and back-end-of-line (BEOL) confinement conditions:

$${g}_{\mathrm{mdd}}=\frac{{i}_{\mathrm{d}}}{{v}_{\mathrm{d}}}={Y}_{\mathrm{dd}21}-{Y}_{\mathrm{dd}12}.$$

(1)

Figure 1a depicts a three-dimensional (3D) unit cell of the resonator with a 14-nm-wide FinFET at the front-end-of-line (FEOL) and periodically arranged metal blocks at the BEOL, categorized into Mx and Cx layers based on their thickness and minimum lateral feature sizes (Fig. 2a). The target resonant mode for the unreleased resonator is along the length direction of the fin transistors. To obtain a lower insertion loss and a higher quality (Q) factor, the radiated acoustic energy needs to be concentrated at the metal–oxide–semiconductor capacitor drive transducers and sensing transistors. Compared with more traditional suspended resonators, this represents a challenge in the unreleased resonant body transistor design in commercial CMOS processes. Integrating a PnC at the BEOL of the standard CMOS process and leveraging the mechanical bandgaps of such structures has proven to be an effective tool to confine the energy and boost the resonance Q factor4,12. As shown by M1 in Fig. 3a, these periodic metals are uniform along the length (y direction) of the transistor gates (orthogonal to the fin direction) within the resonant cavity. To study and optimize the bandgap induced by the PnCs, several 3D simulations were performed in COMSOL Multiphysics20.
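In practice, equation (1) amounts to an anti-symmetric combination of measured Y-parameters; a minimal extraction sketch follows (the NumPy arrays hold placeholder values, not data from this work):

```python
import numpy as np

# Differential transconductance per equation (1): g_mdd = Y_dd21 - Y_dd12.
# Y-parameters are complex and frequency-dependent; dummy values are used here.
def g_mdd(y_dd21, y_dd12):
    return y_dd21 - y_dd12

freq_ghz = np.linspace(8.0, 12.0, 5)                  # IEEE X-band sweep
y21 = (1.0 + 0.5j) * 1e-3 * np.ones_like(freq_ghz)    # illustrative admittances (S)
y12 = (0.2 - 0.1j) * 1e-3 * np.ones_like(freq_ghz)
print(np.abs(g_mdd(y21, y12)))                        # |g_mdd| versus frequency
```

Taking the difference of the two transfer admittances cancels the purely capacitive (reciprocal) feedthrough, leaving the active, non-reciprocal part of the response.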
Specifically, by mapping the real coordinates to reciprocal coordinates and searching for the eigenmodes along the first irreducible Brillouin zone in the 3D reciprocal lattice, a 130 × 60 nm² PnC (such as one that can be designed within the constraints of the Mx layers in the 14 nm process) yields a partial bandgap at the operating point (X) from 9.0 to 14.0 GHz (complete from 11.9 to 12.5 GHz; Fig. 2d). For the longer 106-nm-gate-length device, the design rules are such that a 76 × 32 nm² PnC can be utilized with two periods per unit cell. This smaller periodicity of PnC leads to complete bandgaps (Fig. 2b) from 8.7 to 9.9 GHz and from 12.3 to 14.6 GHz. At X, partial bandgaps exist from 8.2 to 12.0 GHz and from 12.0 to 14.6 GHz. To further confine the elastic energy, a separate PnC designed under the design rules of the Cx metal layer is also considered for a subset of devices. The corresponding dispersion relationship is shown in Fig. 2c. A complete mechanical bandgap is then obtained between 8.7 and 9.8 GHz (a partial one from 9.8 to 11.8 GHz at X), which overlaps with the bandgap induced by the Mx metal PnC. Together, these PnC designs indicate that elastic waves with a frequency of 9–14 GHz propagating along the fin can be prevented from propagating into the BEOL, with a subset of that frequency range experiencing a complete bandgap. For this reason, the devices are designed for a waveguided mode with a differential drive that exhibits a large momentum (kx = π/a) at a frequency below the sound cone for the bulk silicon substrate, laterally confining the mode towards the driving/sensing transistors12. When the design rules do not allow fine-pitched PnCs, continuous metal plates may be used as acoustic Bragg reflectors, although their translational symmetry in two dimensions can only lead to partial bandgaps (Fig. 2e,f)21.
Although the PnCs in the main resonant cavity enhance energy confinement and wave reflection from the BEOL, there also exists wave scattering at the two ends of the resonator. Reducing this scattering further increases the horizontal confinement, reduces the insertion loss, and hence increases the Q. The adiabatic theorem in photonic-waveguide design is leveraged here, which states that as a wave propagates down a waveguide, scattering vanishes in the limit of sufficiently slow perturbation21. Therefore, the key to reducing this scattering is to introduce different units as terminations at the two ends for slower transitions. The unit cell for all the PnC structures in the major termination section throughout the paper is kept as 150 nm × 60 nm. Although all the measured devices have ten terminating gates at each end of the cavity, two types of cavity termination spacing are explored to provide acoustic confinement in the lateral direction, namely, abrupt and gradual termination schemes (Fig. 3b). Abrupt termination refers to the scenario where a terminating periodic array with a constant pitch shifted from that of the resonance cavity array is immediately adjacent to the main resonance region. In this case, the gate length of the termination immediately transitions to the Lterm value (Fig. 3d) and stays constant until the end of the device. As such, the guided waves are abruptly transitioned from the main cavity towards the ends. In the gradual termination scenario, the periodicity of the main resonant cavity adiabatically transitions from Lgate to the periodicity of the terminating array (Lterm), resulting in a termination gate length gradually transitioning from 80 to 140 nm. This serves to reduce scattering from the two ends and boost Q (ref. 12).
FinFET active sensing

To understand the conversion from stress to drain current in the sense transistors, the modulation of three separate FET properties is examined: oxide capacitance, channel mobility due to piezoresistivity, and silicon bandgap. For ease of visualization, an initial analysis is performed with the source-referenced simplified strong-inversion model with α = 1 (the traditional ‘square-law model’ or SPICE level 1)22: $${I}_{\mathrm{d0}}=\mu_0 {C}_{\mathrm{ox}0}\frac{W}{L}\left(({V}_{\mathrm{GS}}-{V}_{\mathrm{T}}){V}_{\mathrm{DS}}-\frac{{V}_{\mathrm{DS}}^{2}}{2}\right),$$ (2) where Id0 is the unstressed drain current; μ0 is the carrier mobility; Cox0 is the gate capacitance; VDS and VGS are the drain-to-source and gate-to-source bias, respectively; W and L are the channel width and length, respectively; and VT is the threshold voltage required for substantial channel conduction. This model has only a handful of parameters to describe the device operation, as opposed to the more advanced BSIM model commonly used in industry (Supplementary Information), which includes many empirically fit parameters to give the best device models for commercial applications23. Additional information on the methods to simulate fRBTs in BSIM using a commercial process design kit is provided elsewhere24. Capacitance modulation is the same mechanism used in the electrostatic drive of these devices, except that rather than directly sensing the gate current, the capacitance modulation is being amplified through transistor action into a change in drain current. The piezoresistive effect, where the semiconductor carrier mobility is modulated through stress in the transistor channel, similarly appears as a multiplier on the transistor drain current. Modulation of the semiconductor bandgap, on the other hand, largely affects the threshold voltage of the transistor. Supplementary Section 1 provides a complete derivation of these effects.
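As a rough illustration, the square-law model of equation (2) can be sketched in a few lines of Python (the language the paper itself uses for its modelling); all parameter values below are illustrative placeholders, not the 14 nm process values:

```python
# Sketch of the source-referenced square-law model of equation (2).
# All parameter values are illustrative placeholders, not process values.

def drain_current(vgs, vds, vt=0.3, mu=0.02, cox=0.025, w=8e-9, l=80e-9):
    """Square-law drain current (A).

    vt  : threshold voltage (V)
    mu  : carrier mobility (m^2 V^-1 s^-1)
    cox : gate capacitance per unit area (F m^-2)
    w, l: channel width and length (m)
    """
    vov = vgs - vt                       # overdrive voltage
    if vov <= 0:
        return 0.0                       # off; subthreshold conduction ignored
    k = mu * cox * w / l
    if vds < vov:                        # linear (triode) region, equation (2)
        return k * (vov * vds - vds ** 2 / 2)
    return k * vov ** 2 / 2              # saturation: equation (2) at vds = vov

print(f"{drain_current(0.8, 0.6):.3e}")  # 6.250e-06
```

The saturation branch follows from evaluating the linear-region expression at VDS = VGS − VT, the point beyond which this simple model pins the current.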
Modified to include the effects of stress on μ, Cox and VT (modelled as μ(σ) = μ0 + Δμ, and analogously for Cox and VT), the original expression for current in the linear region becomes the following: $$\begin{array}{lll}{I}_{\mathrm{d}}(\sigma )&=&\left({\mu }_{0}{C}_{\mathrm{ox}0} +{\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}} +{C}_{\mathrm{ox}0} {{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}} \right)\\ && \times \frac{W}{L}\left(({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0}){V}_{\mathrm{DS}}-\frac{{V}_{\mathrm{DS}}^{2}}{2}\right)\\ && -{{\Delta }}{V}_{\mathrm{T}}{V}_{\mathrm{DS}}\frac{W}{L}\left({\mu }_{0}{C}_{\mathrm{ox}0} \right. \\ && +\left. {\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}} +{C}_{\mathrm{ox}0} {{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}} \right),\end{array}$$ (3) and the stress-induced change in drain current is $$\begin{array}{lll}{{\Delta }}{I}_{\mathrm{d}}(\sigma )&=&({\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}} +{C}_{\mathrm{ox}0} {{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}} ) \\ && \times \frac{W}{L}\left(({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0}){V}_{\mathrm{DS}}-\frac{{V}_{\mathrm{DS}}^{2}}{2}\right)\\ &&-{{\Delta }}{V}_{\mathrm{T}}{V}_{\mathrm{DS}}\frac{W}{L}\left({\mu }_{0}{C}_{\mathrm{ox}0} +{\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}} \right. \\ && +\left.
{C}_{\mathrm{ox}0} {{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}} \right).\end{array}$$ (4) Similarly, in saturation, $${I}_{\mathrm{d}0}=\mu_0 {C}_{\mathrm{ox}0}\frac{W}{2L}\left({({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})}^{2}\left(1+\lambda [{V}_{\mathrm{DS}}-({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})]\right)\right),$$ (5) where λ represents the channel-length modulation coefficient (typically 0.05 to 0.25) and $$\begin{array}{lll}{{\Delta }}{I}_{\mathrm{d}}&=&({\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}} +{C}_{\mathrm{ox}0} {{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}} ) \\ && \times \frac{W}{2L}\left({({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})}^{2}(1+\lambda ({V}_{\mathrm{DS}}-{V}_{\mathrm{GS}}+{V}_{\mathrm{T}0}))\right)\\ &&+\frac{W}{2L}({\mu }_{0}{C}_{\mathrm{ox}0}+{\mu }_{0}{{\Delta }}{C}_{\mathrm{ox}}+{C}_{\mathrm{ox}0}{{\Delta }}\mu +{{\Delta }}\mu {{\Delta }}{C}_{\mathrm{ox}})\\ && \left[\right.{{\Delta }}{V}_{\mathrm{T}}\left[\lambda {({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})}^{2}-2({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})\times \right.\\ && \left. \left(1+\lambda \left[{V}_{\mathrm{DS}}-({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})\right]\right)\right]\\ && +{{\Delta }}{V}_{\mathrm{T}}^{2}\left[\left(1+\lambda \left[{V}_{\mathrm{DS}}-({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})\right]\right)-2\lambda ({V}_{\mathrm{GS}}-{V}_{\mathrm{T}0})\right]\\ && + \left.{{\Delta }}{V}_{\mathrm{T}}^{3}[\lambda ]\right]\end{array}.$$ (6) For modern submicron devices, carriers do not exhibit constant mobility, but rather undergo velocity saturation at high fields.
A simplified way to analytically model this is through a piecewise equation involving the saturation field (Esat) and lateral field (Ex) in the direction of carrier motion as $${v}_{\mathrm{d}}=\left\{\begin{array}{l}\frac{\mu | {E}_{x}| }{1+| {E}_{x}| /{E}_{\mathrm{sat}}}\,\,,\,\,| {E}_{x}| < {E}_{\mathrm{sat}}\quad \\ {v}_{\mathrm{sat}},\,\,| {E}_{x}| > {E}_{\mathrm{sat}}\quad \end{array}\right.,$$ (7) which, by requiring vd = vsat at |Ex| = Esat, gives $${E}_{\mathrm{sat}}=\frac{2{v}_{\mathrm{sat}}}{\mu }$$ (8) and leads to a modified expression for drain current, namely, $${I}_{\mathrm{d}0}=\left\{\begin{array}{l}\mu_0 {C}_{\mathrm{ox}0}\frac{W}{L}\frac{1}{1+\frac{{V}_{\mathrm{DS}}}{{E}_{\mathrm{sat}}L}}\left(({V}_{\mathrm{GS}}-{V}_{\mathrm{T}}){V}_{\mathrm{DS}}-\frac{{V}_{\mathrm{DS}}^{2}}{2}\right)\,\,,\,\,{V}_{\mathrm{DS}} < {V}_{\mathrm{DSsat}}\quad \\ \quad \\ {C}_{\mathrm{ox}0}W{v}_{\mathrm{sat}0}\frac{{({V}_{\mathrm{GS}}-{V}_{\mathrm{T}})}^{2}}{({V}_{\mathrm{GS}}-{V}_{\mathrm{T}})+{E}_{\mathrm{sat}}(L-{{\Delta }}L)},\,\,{V}_{\mathrm{DS}} > {V}_{\mathrm{DSsat}}\quad \end{array}\right.,$$ (9) where $${V}_{\mathrm{DSsat}}=\frac{({V}_{\mathrm{GS}}-{V}_{\mathrm{T}}){E}_{\mathrm{sat}}L}{({V}_{\mathrm{GS}}-{V}_{\mathrm{T}})+{E}_{\mathrm{sat}}L}.$$ (10) An important nuance for acoustic resonators is that both saturation velocity and change in mobility (piezoresistive effect) are related to the change in carrier effective mass caused by the acoustic deformation of the crystal lattice—saturation velocity by m*^(−1/2) and mobility by a factor between m*^(−1/2) and m*^(−5/2) depending on doping (Supplementary Section 2 provides a detailed analysis). This relation means that although the d.c. drain current may saturate, the a.c. piezoresistive effect can still change the drain current to an extent, even in the saturation region. For this reason, a compound model is utilized that incorporates velocity saturation in the calculation of direct drain current but allows mobility to freely vary for a.c. calculations.
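The piecewise velocity-saturation model of equations (7)–(10) can be sketched as follows; parameter values are again illustrative, and channel-length modulation (ΔL) is set to zero for simplicity:

```python
# Sketch of the velocity-saturated model of equations (7)-(10).
# Parameter values are illustrative; channel-length modulation (dL) = 0.

def esat(mu, vsat):
    """Saturation field (V/m), equation (8): Esat = 2*vsat/mu."""
    return 2.0 * vsat / mu

def vds_sat(vgs, vt, mu, vsat, l):
    """Saturation drain voltage (V), equation (10)."""
    vov = vgs - vt
    esl = esat(mu, vsat) * l
    return vov * esl / (vov + esl)

def drain_current(vgs, vds, vt, mu, cox, vsat, w, l):
    """Piecewise drain current (A), equation (9), with dL = 0."""
    vov = vgs - vt
    if vov <= 0:
        return 0.0
    esl = esat(mu, vsat) * l
    if vds < vds_sat(vgs, vt, mu, vsat, l):         # below saturation
        return (mu * cox * w / l) / (1 + vds / esl) * (vov * vds - vds ** 2 / 2)
    return cox * w * vsat * vov ** 2 / (vov + esl)  # velocity-saturated
```

With ΔL = 0 the two branches of equation (9) meet continuously at VDSsat, which gives a quick sanity check on the algebra.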
Under this approach, the contribution to RF output from both electrostatic forces and mobility modulation should scale linearly with the drain current. The contribution from any threshold-voltage shift is seen to be related to drain bias only in the linear regime and is scaled by the impact of mobility and capacitance changes. In the saturation regime, on the other hand, the threshold-voltage shift is primarily a function of gate bias and is modified by drain bias only through channel-length modulation. These expressions for ΔId represent one half of the device transfer function gmdd (equation (1)), with the other half being given by the voltage-to-strain conversion and subsequent strain confinement/resonance. Although the direct measurement of strain in the solid-state resonator in FEOL poses a challenge, all three of these aspects are subsequently investigated using the measured electrical results.

Analysis of design variations

The d.c. results, shown as an example for the 80-nm-gate-length device (A1; Supplementary Section 6), indicate that the sense transistors in all the devices were working as intended. The large current values of approximately 2 mA when the gate and drain are biased at Vdd are due to the parallel d.c. connection of the two sense transistors and large effective width of the individual transistors, where 40 fins span the resonant cavity (in the y direction) and are electrically connected in parallel. Although the d.c. results were similar for all the devices (outside the expected differences due to gate-length variations), resonance-mode confinement was highly dependent on the design variations studied. A comparison of their differential transconductance spectra is shown in Fig. 4 along with the dispersion relations of the 80 and 106 nm devices with Mx PnC and Cx plates. From these data, it is evident that PnC confinement in the Mx layers results in a significantly improved Q factor and maximum transconductance.
An offset in resonant frequencies between the simulated and experimental devices shows the challenge in simulating commercial CMOS devices, where a plethora of materials and complex geometries introduce opportunities for simulation error. Also visible is the role that the type of termination has in defining the centre frequency of nearby spurious modes in the device. This is not easily studied in simulation due to the computational intensity of simulating the full structure, highlighting the importance of experimental design splits in optimizing the performance. Analysis of variance (ANOVA) was used to quantify the impact of these variations in device design on the spectrum of acoustic modes. Such an analysis including several modes does not give an exact value for mode frequency, Q factor or amplitude, but rather compares the average value that might be expected based on the selected design parameters. This analysis indicates an increased average transconductance in gradual transition devices (352 nS) versus abruptly terminated ones (302 nS), with a corresponding increase in the average Q factor (32 versus 24, respectively). In addition to this enhancement, the average resonance centre frequency was shown to slightly decrease from 11.29 to 11.16 GHz, driven by the downward shift in the left-most peak of the high-Q cluster of modes seen in PnC-confined devices. The investigation of PnC confinement showed no significant change in the centre frequency of the modes, but it did show significantly improved average transconductance (204, 379 and 451 nS) and Q factor (20, 38 and 44, respectively) with increasing layers of PnC confinement. These trends are highlighted in Supplementary Fig. 10 along with an example of the peak-fitting results used for parameter extraction. 
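The one-way ANOVA comparison described above can be illustrated with scipy (the paper itself used SAS JMP); the sample values below are synthetic placeholders drawn around the reported group means, not measured transconductances:

```python
# One-way ANOVA across PnC-confinement groups, using scipy instead of the
# SAS JMP software used in the paper. Sample values are synthetic
# placeholders drawn around the reported group means, NOT measured data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical peak-transconductance samples (nS), 30 per confinement level
no_pnc   = rng.normal(204, 40, size=30)
mx_pnc   = rng.normal(379, 40, size=30)
dual_pnc = rng.normal(451, 40, size=30)

f_stat, p_value = stats.f_oneway(no_pnc, mx_pnc, dual_pnc)
# A small p-value says the group means differ beyond chance.
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
```

As in the paper's analysis, this compares average values across design splits rather than predicting any individual mode's frequency or Q.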
A comparison between the 80- and 106-nm-gate-length devices showed increasing average mode frequency at shorter gate lengths (11.66 versus 10.71 GHz, respectively), as expected given that this alters the resonator unit-cell dimensions. The complete ANOVA results are provided in Supplementary Table 1, where the centre frequency, Q factor and height of the resonant peaks in gmdd are compared for the three design splits. As the device with the highest performance, resonator A2 was selected for further analysis, with the maximum transconductance of the 11.75 GHz mode examined as a function of device biasing. This was done at a drive bias of Vdd, as gmdd was seen to linearly increase with the increase in metal–oxide–semiconductor capacitance at higher drive bias (Supplementary Section 6). The dependence of gmdd on sense transistor biasing (Fig. 5) highlights that device transconductance, as expected, is modulated in a manner consistent with piezoresistive and bandgap modulation. Although the transistor parameters are of the correct order of magnitude in the reported data fit, they slightly vary from real-world values due to the simplified model used. For example, the square-law model does not accurately capture the subthreshold (accumulation and weak inversion) behaviour, and thus, the fit favours a slightly lower VT to more closely fit the bias currents near zero. This, in turn, leads to an overestimation of current at higher biases, which is compensated by a lower than expected value for mobility and a slightly larger equivalent oxide thickness. On the a.c. side, the fit is hindered by the presence of a noise floor in the S-parameter measurements that may obscure the lower peak levels. This, along with the poor near-threshold characteristics of the square-law model, contributes to the fitting error at lower values of gmdd. 
Future devices incorporating ferroelectric drive capacitors would mitigate this issue by greatly increasing the efficiency of electromechanical conversion25 and hence boosting the vibration amplitude and/or lowering power consumption. Moving away from the square-law model to a more advanced model such as that derived in another work26 that captures all the regions of operation may decrease this error. Similarly, adopting band-structure simulations of silicon as a result of acoustic deformation could provide a unified source of mobility and threshold-voltage changes in the transistor, at the expense of increased computational requirements. For n-type metal–oxide–semiconductor devices oriented in the <110> direction, the negative piezoresistive coefficients ensure that the contribution of threshold-voltage shift and mobility is additive. If fRBT devices are implemented with p-type metal–oxide–semiconductor devices or in processes with different fin orientations, care should be taken to ensure that the dominant stress-dependent effects do not compete with each other. This may not be as much of an issue for long-channel resonant body transistor devices, as piezoresistive modulation dominates for long-channel devices that are not constrained by the supply voltage. The highest-magnitude peak fit occurred in device A2 (Lgate = 80 nm, gradually terminated, dual PnC) at 11.73 GHz with a differential transconductance of 4.49 μS and Q = 69.8, for an f × Q product of 8.19 × 10¹¹ at drain, gate and drive biases of 0.6, 0.8 and 0.8 V, respectively. Although previously demonstrated unreleased resonators in the 32 nm silicon-on-insulator (SOI) process have achieved an f × Q product of 3.8 × 10¹³ with phononic confinement at 3.3 GHz, these 14 nm devices exhibit 50 times higher transconductance and show an almost three times improvement in Q over a similar frequency-measured mode in 32 nm SOI devices27,28.
Compared with some contemporary resonators in the X-band and adjacent Ku-band (12–18 GHz; Table 1), the unreleased devices in this work achieve moderate f × Q in an area smaller than comparable released devices. This, combined with the fact that the devices are integrated in the FEOL of a commercial CMOS technology node, opens the door for tight logic integration with minimum routing (and associated parasitics). Additional optimizations to the PnC and termination, although challenging due to process design rules, may further boost the Q factor of these devices. Furthermore, the incorporation of PnC confinement in longer gate lengths would enable this enhanced performance to be achieved across the IEEE X-band.

Conclusions

We have reported unreleased resonant fin transistors fabricated in standard 14 nm FinFET technology. An analytical model was developed that predicts resonator performance across sense transistor biasing conditions, using a combined velocity-saturated current at d.c. and unsaturated mobility variation at a.c. This highlights how the acoustic perturbation of carrier effective mass simultaneously modulates the saturation velocity and mobility. PnC confinement, which was experimentally shown to be the most important parameter in improving the unreleased resonator performance (an improvement on average of 2.2 times), provides a considerable opportunity for integrating acoustic devices into standard CMOS platforms. Our highest performing resonator—a gradually terminated 80-nm-gate-length device with both Mx and Cx PnC confinement—exhibits a transconductance amplitude of 4.49 μS and Q of 69.8 in the 11.73 GHz mode, resulting in an f × Q product of 8.2 × 10¹¹. This technology—combined with advances in CMOS-compatible ferroelectric and piezoelectric thin films—could be used to develop compact, low-cost, CMOS-integrated and electrically controllable resonators that require no additional packaging.
Characterization methods

Device measurement

An overview of the entire data pipeline is depicted in Supplementary Fig. 6. This process begins with S-parameter measurements taken at room temperature and pressure, with an input power of –10 dBm using an Agilent Technologies N5225a PNA instrument, with biasing provided by three Keithley 2400 source measure units (Supplementary Fig. 7). All four instruments were connected via General Purpose Interface Bus and synchronized via published Python3 scripts29,30. WinCal XE 4.7 software (FormFactor) was used to facilitate the pre-measurement calibration, and hybrid Line-Reflect-Reflect-Match-Short-Open-Load-Reciprocal calibration was performed using an impedance standard substrate (ISS 129-246). Infinity ground–signal–signal–ground probes were used to allow for a compact biasing scheme, saving die real estate at the expense of increased cross-talk between the adjacent signal lines. Gate bias for the sense transistors was provided via a separate d.c. needle probe.

Measurement data post-processing

The acquired Touchstone files were first analysed using scikit-rf, which was used to perform open de-embedding and transform the single-ended parameters collected into their mixed-mode equivalent matrices31. From these, gmdd was extracted to allow for the analysis of the designed differential mode. The magnitude of gmdd for each sample at a bias of Vdrain = 0.6 V, Vgate = 0.8 V and Vdrive = 0.8 V was modelled as a collection of Gaussian peaks and fit via Lmfit using the Levenberg–Marquardt algorithm with initial guesses for frequency and peak height provided via user input through matplotlib’s ginput method32. These preliminary fits were then used as the initial conditions for the remainder of the 450 collected measurements taken at drain biases of 0.2, 0.4 and 0.6 V; drive biases of 0, 0.2, 0.4, 0.6 and 0.8 V; and sense FET gate biases of 0 and 0.8 V. These were again fit via Lmfit and parallelized using Dask33.
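A minimal sketch of the Gaussian peak-fitting step follows, using scipy's curve_fit in place of the Lmfit/Levenberg-Marquardt pipeline described above; the "spectrum" here is synthetic, not measured gmdd data:

```python
# Sketch of the multi-Gaussian peak-fitting step, using scipy's curve_fit
# in place of the Lmfit / Levenberg-Marquardt pipeline described above.
# The "spectrum" is synthetic, not measured gmdd data.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(f, amp, f0, sigma):
    return amp * np.exp(-((f - f0) ** 2) / (2 * sigma ** 2))

def two_peaks(f, a1, f1, s1, a2, f2, s2):
    return gaussian(f, a1, f1, s1) + gaussian(f, a2, f2, s2)

freq = np.linspace(9.5, 13.5, 400)                     # GHz
rng = np.random.default_rng(1)
spectrum = two_peaks(freq, 450, 11.75, 0.08, 200, 12.4, 0.15)
spectrum += rng.normal(0, 5, freq.size)                # add a noise floor

# Initial guesses play the role of the user-supplied ginput() seeds.
p0 = [400, 11.7, 0.1, 150, 12.5, 0.2]
popt, _ = curve_fit(two_peaks, freq, spectrum, p0=p0)
print(f"fitted centres: {popt[1]:.2f} and {popt[4]:.2f} GHz")
```

As in the measured pipeline, the quality of the initial guesses largely determines whether the local Levenberg-Marquardt search lands on the intended peaks.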
Due to the noisy nature of the data, with varying spectra between devices, a number of constraints were added to improve the fitting accuracy over all the design variations. The fitting was weighted by the gmdd amplitude to emphasize the primary peaks over smaller spurious modes and background noise. Variation in peak standard deviation between the biases was constrained to two times the original standard deviation, reflecting the fact that Q is a function of the physical mode confinement. The centre frequency for each sample was constrained to a maximum of 5% deviation as a function of bias to improve the fitting accuracy and reflect the fact that mode shape—and thus the frequency of operation—again is primarily dictated by device geometry and not biasing.

Design variation analysis

The extracted peak properties were analysed in SAS JMP Pro 14 statistical discovery software to investigate the relationship between the splits in device design and variation in performance as a function of device bias34. Before the analysis was carried out, high-error peak fits were eliminated (mean square error, >0.025; ≈5% of the data overall). The device splits (Fig. 2) are not entirely orthogonal due to a variety of constraints in device design related to the PnCs and gate lengths. Thus, one-way ANOVA was used on subsets of the experimental devices rather than trying to fit a multidimensional model to the entire sample set. To increase the number of statistical samples (and thus the chance of significance), all the bias combinations for each device were included in the analysis. For termination, this process is straightforward as there exists an abrupt and gradually terminated copy of every device, resulting in a comparison of all the devices. To investigate PnC confinement, the analysis was limited to designs with 80 nm gate length (A1–A4 and B0–B1) and the confinement treated as a three-level categorical factor.
Compared with the other factors studied, Lgate is unique in that it is the primary design variable for selecting the centre frequency due to the mode being defined by source/drain spacing. This means that the different values studied dramatically change the cavity shape, and thus not all the peaks detected in the studied frequency span of 9.5–13.5 GHz can be used in the analysis, as they are not present in this range in all the devices. To overcome this, a much smaller comparison of A3 and A4 (Lgate = 80 nm) to A5 and A6 (Lgate = 106 nm) was performed using only the prominent modes with transconductance above 500 nS and Q above 25 that were present towards the centre of the spectra. The downside of this approach is that the gate length is confounded with a change in Mx PnC design that is necessary for the different device geometries due to challenges in design rules. Despite this, the fact that the confinement method showed no significant change in centre frequency for 80 nm devices points towards this shift being dominated by gate length, as expected.

Transistor modelling

The examination of expected device performance using the derived analytical model was performed in Python. The contribution for each of the three components (bandgap, mobility and capacitance modulation) was taken as the difference between the positive- and negative-stress components. Lmfit was used to perform the optimization routine, using a basin-hopping algorithm to find a global minimum for d.c. The fit of gmdd was performed using the Levenberg–Marquardt algorithm. The simplified one-dimensional model of a 3D transistor led to substantial cross-correlation between the stress directional components; however, the magnitude of the total stress remained relatively constant across the fit iterations.
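The basin-hopping step can be illustrated directly with scipy (the paper drove it through Lmfit); the objective below is a toy residual with several local minima, not the actual d.c. fit:

```python
# Sketch of the global-minimisation step using scipy's basinhopping
# directly (the paper drove it through Lmfit). The objective is a toy
# residual with many local minima, not the actual d.c. fit.
import numpy as np
from scipy.optimize import basinhopping

def residual(x):
    # Local minima near every multiple of pi/3; global minimum at x = 0.
    return x[0] ** 2 + 5.0 * np.sin(3.0 * x[0]) ** 2

result = basinhopping(residual, x0=[2.5], niter=250, stepsize=1.0, seed=7)
print(f"global minimum near x = {result.x[0]:.3f}")
```

Random hops plus a local minimizer at each hop let the search escape the local minima that would trap a plain Levenberg-Marquardt fit started far from the true parameters.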
The etale site of a closed subscheme and its etale Grothendieck subtopology

There is a very basic theorem for the Zariski topology. Let X = Spec(R) and Y = Spec(R/I) for some reduced ideal I. Y obtains a topology in two ways: one is the subspace topology as a subset of X, and another as the spectrum of a ring. These topologies are the same by the correspondence between ideals of R containing I and ideals in R/I. Is there an analogous statement in the etale topology? There are two natural ways to understand open sets on Y: those which come from etale neighborhoods of X base changed to Y, and those which are etale neighborhoods of Y. I did a computation today in a very special case and it seems that both of these topologies are 'the same'. Does anyone know if this statement is true in a general context and where I might locate this resource? Thanks.

- See EGA4, Proposition 18.1.1. –  Jonathan Wise Feb 28 '11 at 5:54
- EGA IV Proposition 18.1.1 says that Zariski locally, any etale (resp. smooth) neighborhood of $Y$ is induced by an etale (resp. smooth) neighborhood of $X$. –  Anton Geraschenko Feb 28 '11 at 19:14
- There is an improvement by Arabia of the EGA proposition in question. This answers questions of Monsky and Washnitzer. The article is called "Relèvements des algèbres lisses et de leurs morphismes" ems-ph.org/journals/… –  name Jul 7 '12 at 14:39

2 Answers

The statement is at least true Zariski locally. That is, given an étale map $V\to Y$, there exists a Zariski open cover $X=\bigcup X_i$ so that the pullback of $V$ to $Y\cap X_i$ is the restriction of an étale neighborhood of $X_i$. To see this, use the structure theorem for étale morphisms: Theorem 34.11.3 in the chapter on étale morphisms in the Stacks Project. It says that any étale morphism to $Y=Spec(R/I)$ is Zariski locally an open subscheme $V$ of $Spec((R/I)[t]_{\bar f'}/(\bar f))$, where $\bar f\in (R/I)[t]$ is monic. Let $f\in R[t]$ be an arbitrary monic lift of $\bar f$.
Then since the Zariski topology of $Spec((R/I)[t]_{\bar f'}/(\bar f))$ is the restriction of the Zariski topology of $Spec(R[t]_{f'}/(f))$, there is some open subscheme $U$ of $Spec(R[t]_{f'}/(f))$ so that $V = U\cap Spec((R/I)[t]_{\bar f'}/(\bar f))$. This $U$ is étale over $X$ and pulls back to $V$. I'm implicitly replacing $X=Spec(R)$ and $Y=Spec(R/I)$ by localizations $Spec(R_g)$ and $Spec(R_g/I\cdot R_g)$ in the rest of the argument. The open cover $X=\bigcup X_i$ consists of these $Spec(R_g)$'s and the complement of $Y$. - Thanks. This proof was very close to my original proof. I felt that my extra assumptions had more to do with personal comfort than actual mathematics. –  Anonymous Feb 28 '11 at 5:06 Note that one cannot expect a global version on $X$: take $X=\mathbb C^2$ and $Y: xy-1=0$ (which is isomorphic to $\mathbb C^*$). As $X$ is simply connected, the only connected étale cover of $Y$ you can extend to $X$ is the identity. –  Qing Liu Feb 28 '11 at 12:37 Here is one way of addressing your question. Let $X$ be a scheme, $Y$ a closed subscheme of $X$, and $U$ its open complement. Let $i: Y \rightarrow X$ and $j: U \rightarrow X$ denote the inclusion maps. Then the pushforward functor $i_{\ast}$ determines a fully faithful embedding from the category of etale sheaves on $Y$ to the category of etale sheaves on $X$. The essential image of this embedding is the full subcategory spanned by those etale sheaves $\mathcal{F}$ on $X$ such that $j^{\ast} \mathcal{F}$ is final (in other words, such that $\mathcal{F}(V)$ has a single element for every etale map $V \rightarrow X$ which factors through $U$). In topos-theoretic language, this says that $i$ induces a closed immersion of etale topoi, which is complementary to the open immersion of etale topoi determined by $j$. (The above discussion makes sense in an arbitrary topos, and for a topos of sheaves on a topological space it recovers the usual notion of closed embedding).
I don't know a reference for the above statement offhand, but my guess would be that you can find a discussion in SGA somewhere. - Thanks for rephrasing it in this language. I'll browse SGA for statement like this tomorrow. –  Anonymous Feb 28 '11 at 5:08 Can the topoi really recover this kind of information about the original etale sites? It seems like knowing that $i$ is a closed immersion of etale topoi should prove that the canonical topology on the etale topos of $Y$ is induced by the canonical topology on the etale topos of $X$ (or something like that). –  Anton Geraschenko Feb 28 '11 at 5:21 The original question mentions that given a commutative ring R and an ideal I, two topological spaces (defined using the Zariski topology) are homeomorphic. My answer was addressing the question "what is an analogous statement for the etale topology?" (The answer being that you can extract two topoi which are equivalent.) I don't think you can formally deduce anything about defining sites from this. Rather the information would go in the other direction: to write down a proof of my claim, you'll want to think about the problem of lifting etale coverings, as in your earlier reply. –  Jacob Lurie Feb 28 '11 at 5:34 Thanks for clarifying. –  Anton Geraschenko Feb 28 '11 at 5:42
# allennlp.data.instance

class allennlp.data.instance.Instance(fields: MutableMapping[str, allennlp.data.fields.field.Field])[source]

Bases: typing.Mapping

An Instance is a collection of Field objects, specifying the inputs and outputs to some model. We don’t make a distinction between inputs and outputs here, though - all operations are done on all fields, and when we return arrays, we return them as dictionaries keyed by field name. A model can then decide which fields it wants to use as inputs and which as outputs. The Fields in an Instance can start out either indexed or un-indexed. During the data processing pipeline, all fields will be indexed, after which multiple instances can be combined into a Batch and then converted into padded arrays.

Parameters

fields : Dict[str, Field]
The Field objects that will be used to produce data arrays for this instance.

add_field(self, field_name:str, field:allennlp.data.fields.field.Field, vocab:allennlp.data.vocabulary.Vocabulary=None) → None[source]
Add the field to the existing fields mapping. If we have already indexed the Instance, then we also index field, so it is necessary to supply the vocab.

as_tensor_dict(self, padding_lengths:Dict[str, Dict[str, int]]=None) → Dict[str, ~DataArray][source]
Pads each Field in this instance to the lengths given in padding_lengths (which is keyed by field name, then by padding key, the same as the return value in get_padding_lengths()), returning a list of torch tensors for each field. If padding_lengths is omitted, we will call self.get_padding_lengths() to get the sizes of the tensors to create.

count_vocab_items(self, counter:Dict[str, Dict[str, int]])[source]
Increments counts in the given counter for all of the vocabulary items in all of the Fields in this Instance.

get_padding_lengths(self) → Dict[str, Dict[str, int]][source]
Returns a dictionary of padding lengths, keyed by field name.
Each Field returns a mapping from padding keys to actual lengths, and we just key that dictionary by field name. index_fields(self, vocab: allennlp.data.vocabulary.Vocabulary) → None[source] Indexes all fields in this Instance using the provided Vocabulary. This mutates the current object; it does not return a new Instance. A DataIterator will call this on each pass through a dataset; we use the indexed flag to make sure that indexing only happens once. This means that if for some reason you modify your vocabulary after you've indexed your instances, you might get unexpected behavior.
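The count-then-index-then-tensorize pipeline described above can be illustrated with a minimal toy sketch. This is not the AllenNLP implementation — the `LabelField` and `Instance` classes below are simplified stand-ins — but it shows the same flow: fields contribute vocabulary counts, a vocabulary is built, `index_fields` runs once per instance (guarded by the `indexed` flag), and tensors come back keyed by field name:

```python
from collections.abc import Mapping

class LabelField:
    """Toy stand-in for an AllenNLP Field holding a single label string."""
    def __init__(self, label):
        self.label = label
        self._index = None

    def count_vocab_items(self, counter):
        # Increment the count for this label under the "labels" namespace.
        counter.setdefault("labels", {})
        counter["labels"][self.label] = counter["labels"].get(self.label, 0) + 1

    def index(self, vocab):
        self._index = vocab["labels"][self.label]

    def as_tensor(self):
        return self._index

class Instance(Mapping):
    """Toy Instance: a mapping from field names to Field objects."""
    def __init__(self, fields):
        self.fields = dict(fields)
        self.indexed = False

    def __getitem__(self, key): return self.fields[key]
    def __iter__(self): return iter(self.fields)
    def __len__(self): return len(self.fields)

    def count_vocab_items(self, counter):
        for field in self.fields.values():
            field.count_vocab_items(counter)

    def index_fields(self, vocab):
        if not self.indexed:          # index only once, as in AllenNLP
            for field in self.fields.values():
                field.index(vocab)
            self.indexed = True

    def as_tensor_dict(self):
        # Returns plain ints here where AllenNLP would return torch tensors.
        return {name: f.as_tensor() for name, f in self.fields.items()}

# Build a vocabulary from counts, then index and tensorize.
instances = [Instance({"label": LabelField("pos")}),
             Instance({"label": LabelField("neg")})]
counter = {}
for inst in instances:
    inst.count_vocab_items(counter)
vocab = {"labels": {lab: i for i, lab in enumerate(sorted(counter["labels"]))}}
for inst in instances:
    inst.index_fields(vocab)
tensors = [inst.as_tensor_dict() for inst in instances]
```

Calling `index_fields` a second time is a no-op thanks to the `indexed` flag, mirroring the behavior documented above.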
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#11 Question The diagram shows the curve  , which intersects the x-axis at A and the  y-axis at B. The normal to the curve at B meets the x-axis at C. Find      i.       the equation of BC,    ii.       the area of the shaded region. Solution      i.   To find the equation of the line either we […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#8 Question The volume of a solid circular cylinder of radius  cm is  cm3.      i.       Show that the total surface area, S cm2, of the cylinder is given by     ii.       Given that  can vary, find the stationary value of .   iii.       Determine the nature of this stationary value. Solution      i.   We are given that volume of solid circular cylinder; Expression […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#7 Question The point R is the reflection of the point  in the line . Find by calculation the coordinates of R. Solution We are given equation of the line ; Slope-Intercept form of the equation of the line; Where  is the slope of the line. We can rearrange this equation to write it in standard point-intercept form. Therefore […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#5 Question It is given that  and , where .      i.       Show that  has a constant value for all values of .    ii.       Find the values of  for which . Solution      i.   We are given that; Taking squares of both sides of each equation. 
We have the algebraic formulae; We can simplify these to; Adding both sides of these […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#4 Question The diagram shows a square ABCD of side 10 cm. The mid-point of AD is O and BXC is an arc of a circle with centre O.      i.       Show that angle BOC is 0.9273 radians, correct to 4 decimal places.    ii.       Find the perimeter of the shaded region.   iii.       Find the area of the shaded region. Solution […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#3 Question The straight line  is a tangent to the curve  at the point P. Find the value of the constant  and the coordinates of P. Solution If line  is tangent to the curve that means it intersects the curve at a single point.  Now we need to find the coordinates of the ONLY point of intersection […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#1 Question A curve is such that  and  is a point on the curve. Find the equation of the curve. Solution i.   We are given that curve with   passes through the point  and we are required to find the equation of the curve. We can find the equation of the curve from its derivative through integration; For the given case; […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#6 Question Relative to an origin , the position vectors of the points A and B are given by , and where  and  are constants.      i.       State the values of p and q for which  is parallel to .    ii.       
In the case where , find the value of  for which angle BOA is .   iii.       In the case where  and , […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#9 Question A function f is defined by , for .      i.       Find an expression for .    ii.       Determine, with a reason, whether  is an increasing function, a decreasing function or neither.   iii.       Find an expression for  and state the domain and range of . Solution i.   We have the function; The expression for  represents derivative of . We can rewrite as; Rule […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#2 Question Find the coefficient of  in the expansion of      i.           ii.       Solution i.   First rewrite the given expression in standard form. Expression for the general term in the Binomial expansion of  is: In the given case: Hence; Since we are looking for the coefficient of the term : we can  equate Finally substituting  in: Therefore the […] # Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2013 | May-Jun | (P1-9709/12) | Q#10 Question a)   The first and last terms of an arithmetic progression are 12 and 48 respectively. The sum of the  first four terms is 57. Find the number of terms in the progression. b)  The third term of a geometric progression is four times the first term. The sum of the first six  terms is k times the […]
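The arithmetic-progression part of Q#10 can be checked numerically. With first term a = 12, last term 48, and sum of the first four terms 57, the sum formula S₄ = (4/2)(2a + 3d) fixes the common difference d, and the last-term formula l = a + (n − 1)d then gives the number of terms; a quick sketch:

```python
# Q#10(a): first term 12, last term 48, sum of the first four terms 57.
a, last, s4 = 12, 48, 57

# S_4 = (4/2) * (2a + 3d)  =>  d = (s4/2 - 2a) / 3
d = (s4 / 2 - 2 * a) / 3          # common difference: 1.5

# last = a + (n - 1) * d  =>  n = (last - a)/d + 1
n = (last - a) / d + 1            # number of terms: 25
```

So the progression has 25 terms, consistent with working the two formulas by hand.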
# Projective family of probability spaces This is a crosspost of this question from MSE. I'm confused about the definition of a projective family of probability spaces $(S_t,\mathscr S _t,\mu_t,f_{ts})_{s,t\in T}$. The conditions • $f_{tt}=1_{S_t}$ • $f_{us}=f_{ts}\circ f_{ut}$ where $s\leq t\leq u$ are clear from the usual definition of a projective system as a functor from $\mathsf T^\text{op}$, the opposite poset category. However, (in addition to measurability) the following condition always appears: • $\mu_s = (f_{ts})_{\ast}(\mu_t)$, where the RHS is the pushforward measure. This looks to me like a kind of cone coherence condition, but it does not (as far as I can see) fall out of the categorical formalism of limits if one works in the category with probability spaces as objects and measurable maps as arrows. Furthermore, the projective limit (if it exists) is required to satisfy the following two conditions: • Its $\sigma$-algebra is generated by the restrictions of the projections $\pi_t$ • Its probability measure $\mu$ must satisfy $(\pi_t)_\ast (\mu)=\mu _t$ In what category should one work so that all these conditions fall out of the categorical formalism? In other words, in what category does the categorical notion of projective limit coincide exactly with the one found in books? • How about the category of probability spaces $(\Omega, \mathscr{O}, P)$ where morphisms are measurable maps $(\Omega_0,\mathscr{O}_0,P_0)\to (\Omega_1,\mathscr{O}_1,P_1)$ such that $f_*P_0=P_1$. – Liviu Nicolaescu Mar 31 '15 at 18:57 • @LiviuNicolaescu that feels artificial though. Usually, compatibility conditions come from unraveling the definitions of limit, not from the arrows themselves. – Exterior Mar 31 '15 at 18:59 • Liviu is right. In a category of probability spaces, one wants morphisms that respect the whole probability space structure, not just the $\sigma$-algebra but also the measure. 
– Andreas Blass Mar 31 '15 at 19:09 • @Exterior I do not understand your comment. The arrows do matter. Think of the ring of $2$-adics $\mathbb{Z}_{(2)}$ vs. the ring of $3$-adics $\mathbb{Z}_{(3)}$. They're defined by identical diagrams, but the meaning of arrows is different. The result is nonisomorphic local rings. – Liviu Nicolaescu Mar 31 '15 at 19:13 • @LiviuNicolaescu I'm sure you are right. Andreas Blass's comment helped me understand that the measure is another piece of structure one should ask the arrows to respect. – Exterior Mar 31 '15 at 19:15 These conditions fall out of the categorical formalism, using the category of measurable spaces, provided you view probability measures from a slightly different perspective. Think of a probability measure as an affine, weakly averaging functional $I^X \rightarrow I$ which preserves limits, where $I=[0,1]$...see the paper http://arxiv.org/pdf/1406.6030.pdf The category of measurable spaces is a symmetric monoidal closed category, which makes all this, from my (biased) perspective, fairly easy to work out. This is indirectly a question about the Giry monad; but the Giry monad is naturally isomorphic to a submonad of the double dualization monad, where analysis of this problem is easier. (Giry's paper also attempts to address this projective limit problem; see http://link.springer.com/chapter/10.1007%2FBFb0092872.) It's easier to work in the category of measurable spaces directly (rather than the Kleisli category). Let me just point out that the tensor product of the spaces in question exists and the projection maps are measurable, so that the pushforward maps look like that in, say, Diagram 1, page 9. 
The projective limit, provided it exists, is then just a "probability measure" on the tensor product space $\otimes_i X_i$, i.e., a weakly averaging affine functional $I^{\otimes_i X_i} \stackrel{P}{\longrightarrow} I$ which preserves limits, with the composite $I^{X_j} \stackrel{I^{\pi_j}}{\longrightarrow} I^{\otimes_i X_i} \stackrel{P}{\longrightarrow} I$ being the pushforward along the coordinate projection map $\pi_j$. (These projection maps are measurable on the tensor product space, as discussed in the paper.)
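As the comments on the question suggest, once the arrows are taken to be measure-preserving maps, all three measure conditions reduce to saying that certain maps are morphisms; a sketch of that bookkeeping:

```latex
% A morphism of probability spaces is a measurable map with f_* P_0 = P_1.
% Transition maps being morphisms (for s \le t) gives
\[
  \mu_s = (f_{ts})_* \mu_t ,
\]
% and the legs \pi_t of a cone with apex (S, \mathscr{S}, \mu) being
% morphisms gives
\[
  \mu_t = (\pi_t)_* \mu .
\]
% The cone identity \pi_s = f_{ts} \circ \pi_t then yields, with no extra
% coherence condition imposed by hand,
\[
  (\pi_s)_* \mu = (f_{ts})_* \bigl( (\pi_t)_* \mu \bigr) = (f_{ts})_* \mu_t = \mu_s .
\]
```

So in the category of probability spaces with measure-preserving maps, the pushforward conditions are exactly the requirement that the diagram consist of morphisms, not an added axiom.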
IMR Press / CEOG / Volume 49 / Issue 11 / DOI: 10.31083/j.ceog4911257 Open Access Original Research Effect of Super-Specialization in External Cephalic Version: A Comparative Study 1 Department of Obstetrics and Gynecology, ‘Virgen de la Arrixaca' University Clinical Hospital, 30120 Murcia, Spain 2 Department of Surgery, Obstetrics and Gynecology and Pediatrics of University of Murcia, 30120 Murcia, Spain *Correspondence: javier.sanchez14@um.es (Javier Sánchez-Romero) Clin. Exp. Obstet. Gynecol. 2022, 49(11), 257; https://doi.org/10.31083/j.ceog4911257 Submitted: 2 July 2022 | Revised: 24 August 2022 | Accepted: 26 August 2022 | Published: 16 November 2022 (This article belongs to the Special Issue Clinical Research of Epidemiology in Pregnant Women) This is an open access article under the CC BY 4.0 license. Abstract Background: The effect of introducing an experienced dedicated team has not been fully studied. Several studies reported a high external cephalic version (ECV) success rate when the procedure is executed by a single operator or a dedicated team. This study aims to compare the effectiveness and safety of ECV when the procedure is performed by senior experienced obstetricians or by super-specialized professionals who compose a dedicated team. Methods: Longitudinal retrospective analysis of ECVs performed in a tertiary hospital. From 1 January 2018 to 1 October 2019, ECVs were performed by two senior experienced obstetricians who composed the dedicated ECV team, designated as Group A. From 1 October 2019 to 31 December 2019, ECVs were performed by two senior obstetricians, designated as Group B. Ritodrine was administered for 30 minutes just before the procedure. Propofol was used for sedation. Results: 186 pregnant women were recruited (150 patients in Group A and 36 patients in Group B). 
The ECV success rate increased from 47.2% (31.7–63.2) in Group B to 74.0% (66.6–80.5) in Group A (p = 0.002). The greatest increase in the success rate of ECV was seen in nulliparas, from 38.5% (21.8–57.6) in Group B to 69.1% (59.4–77.6) (p = 0.004). The complication rate decreased from 22.2% (11.1–37.6) in Group B to 9.3% (5.5–14.8) in Group A (p = 0.032). Conclusions: The introduction of an experienced dedicated team improves the ECV success rate, especially in nulliparas, and it also reduces the ECV complication rate. Keywords sedation experience ECV breech presentation 1. Introduction A breech presentation occurs in 3–4% of all pregnant women at term [1]. Since the publication of the Term Breech Trial in 2000 [2], which reported excess neonatal mortality as a consequence of breech vaginal delivery, cesarean delivery rates have risen alarmingly [3]. External cephalic version is an effective procedure for modifying the fetal position and achieving a cephalic presentation. The purpose of ECV is to offer a chance of a cephalic delivery, which is safer than breech delivery or cesarean section. The use of ECV in breech presentation, according to the World Health Organization [4], reduces the incidence of cesarean section, which is of interest in those units where vaginal breech delivery is not common practice. ECV is commonly performed before the active labor period begins. Many factors are associated with higher ECV success [5, 6, 7], such as black race, multiparity, posterior placenta, amniotic fluid index higher than 10 cm, or a transverse lie. Certain interventions facilitate ECV [8], such as analgesia, tocolysis, an empty bladder [9], or the introduction of a dedicated experienced team [10]. Although ritodrine is considered the safest tocolytic drug and the agent that most improves the ECV success rate [8, 11], other tocolytics that have also been compared for ECV include atosiban [8], nifedipine [8], other beta-agonists [12], and nitroglycerine [12]. 
Other analgesic agents have been analyzed, such as systemic opioids or spinal anesthesia. Spinal anesthesia techniques improve the ECV success rate and pain after the procedure [13, 14, 15, 16]. No differences in the ECV success rate are reported when systemic opioids and spinal anesthesia are compared [13]. The effect of introducing an experienced dedicated team has not been fully studied. Several studies reported a high ECV success rate when the procedure is executed by a single operator [17, 18] or a dedicated team [5, 10, 19, 20]. Just one study has compared a dedicated team with non-experienced gynecologists, midwives, and residents [10]. The main objective of this study is to compare ECV results when the procedure is performed by an experienced dedicated team or by senior obstetricians who are not involved in a dedicated team. As a secondary objective, predictive factors of ECV success are analyzed in both groups. We hypothesized that an experienced dedicated team could have a higher ECV success rate and a lower complication rate. 2. Materials and Methods A longitudinal retrospective analysis of ECVs performed in ‘Virgen de la Arrixaca’ University Clinical Hospital in Murcia (Spain) between 1 January 2018 and 31 December 2019 was performed. Informed written consent was obtained from all the patients under study. The confidentiality of any information about the patients was assured. There was no obligation on the patients to participate in the study. This study was approved by the Clinical Research Committee of the ‘Virgen de la Arrixaca’ University Clinical Hospital (2020-5-6-HCUVA). This study conforms with the 2013 Helsinki World Medical Association Declaration. The procedures were performed by two of the four senior experienced obstetricians who composed the dedicated ECV team in the Maternal-Fetal Unit from 1 January 2018 to 30 September 2019. In this study, this group is designated as ‘Group A’. 
The dedicated ECV team in the Maternal-Fetal Unit has more than 7 years of experience in ECV. More than seven hundred procedures have been carried out by this team during this period. However, the members of the dedicated ECV team were absent between 1 October 2019 and 31 December 2019, and during that period ECVs were performed by two senior obstetricians specialized in obstetrical care with 15 years of experience in the delivery room. These senior colleagues were not involved in the dedicated ECV team. In this study, this group is designated as ‘Group B’. Patients were offered ECV during the third-trimester evaluation at 36 weeks gestation. Recruitment criteria were the same for Group A and Group B, and there were no changes in ECV offering criteria during this period. ECV was proposed for every pregnant woman with a non-cephalic presentation and no contraindication for vaginal delivery. Women were deemed ineligible in cases of severe preeclampsia, confirmed rupture of membranes, recent vaginal bleeding, and when an absolute indication for cesarean section was identified (e.g., placenta previa). Obstetric anamnesis and an ultrasound scan to assess fetal position, biometry, placental location, and amniotic fluid were carried out at the consultation. If the patient was considered eligible and informed consent was obtained, ECV was performed at 37 weeks gestation. All patients were asked to fast at least eight hours before the procedure. 2.1 Procedure ECV was performed following the same protocol in both groups [21, 22]. Our group published a previous study describing the procedure [21] and analyzing the role of sedation with propofol [22]. The procedure was carried out in the operating room in the presence of a midwife and an anesthesiologist. Before ECV was performed, pregnant women were evaluated by the anesthesiologist. Just before the ECV, 0.2 mg/min of ritodrine was administered for 30 minutes. 
In the operating room, vital signs were monitored (temperature, noninvasive blood pressure, heart rate, electrocardiogram (EKG), and oxygen saturation). The patient was positioned in Trendelenburg (15º) and administered 1–1.5 mg/kg of propofol [22]. Paracetamol was used as the analgesic agent. Two ECV attempts were performed by two obstetricians following the forward-roll technique. Immediately after the procedure, the fetal position was reassessed with an ultrasound scan and fetal well-being was assessed with a continuous cardiotocographic recording during the following 4 hours. Anti-D was given to rhesus-negative women. Twenty-four hours after the procedure, continuous monitoring was performed for one hour and the fetal position was reassessed. If any complication occurred during or immediately after the procedure, an urgent cesarean section was performed. 2.2 Outcome Variables The procedure is considered successful when a cephalic presentation is achieved. Intraversion cesarean is defined as any cesarean carried out during the ECV or in the first 24 hours after the procedure due to any complication secondary to it (i.e., fetal compromise, cord prolapse, vaginal bleeding, …). 2.3 Statistical Analysis Data were recorded retrospectively on all referrals. Continuous variables were assessed for normality with the Shapiro–Wilk test. The primary outcome variable was ECV success. The secondary outcome variable was the incidence of intraversion cesarean section. Obstetric history, anthropometric measurements, estimated fetal weight at the 3rd trimester, placental location, and fetal presentation underwent bivariate analysis using Student’s t-test or Pearson’s chi-squared test to compare the characteristics of each group. Subsequently, the primary and secondary outcome variables were compared between both groups. 
Afterward, taking the primary and secondary outcome variables for each group, all variables mentioned above with a p-value $<$ 0.2 in the bivariate analysis were entered into a multivariable logistic regression model for both groups. All tests were two-tailed and the level of statistical significance was set at 0.05. Data analysis was assisted with SPSS version 25.0 (SPSS Inc., Chicago, IL, USA), R version 3.6.2 (https://www.r-project.org/. Accessed 29 February 2022. R Foundation for Statistical Computing, Vienna, Austria), and RStudio version 1.2.5033: Integrated Development for R (RStudio, Inc., Boston, MA, USA). 3. Results During this period, 203 pregnant women were offered an ECV. A spontaneous cephalic presentation before the procedure was observed in 15 patients, and preterm labor began in two. Finally, 186 pregnant women underwent an ECV attempt. Of these, 150 (80.6%) were performed by Group A, and 36 (19.4%) were carried out by Group B (Fig. 1). 123 women were nulliparas (66.1%) and 63 (33.9%) were multiparas. Baseline characteristics are depicted in Table 1. Baseline characteristics were comparable for Group A and Group B. Fig. 1. Flowchart diagram of study population. Table 1. Characteristics of pregnant women who underwent external cephalic version (ECV). 
| Characteristics | Total (186) | 95% CI | Group A (150) | 95% CI | Group B (36) | 95% CI |
| Age, years | 32.3 | 31.5–33.0 | 33.5 | 31.8–35.2 | 33.5 | 31.8–37.2 |
| GA at ECV, weeks | 37.5 | 37.4–37.5 | 37.4 | 37.2–37.5 | 37.4 | 37.2–37.5 |
| Gravida | 1.9 | 1.7–2.1 | 2.0 | 1.6–2.4 | 2.0 | 1.6–2.4 |
| Nulliparous, % (n) | 66.1% (123) | 59.1–72.6 | 64.7% (97) | 56.8–72.0 | 72.2% (26) | 56.3–84.7 |
| Previous CS, % (n) | 3.8% (7) | 1.7–7.2 | 3.3% (5) | 1.3–7.2 | 5.6% (2) | 1.2–16.6 |
| BMI, kg/m² | 27.7 | 27.0–28.4 | 27.5 | 26.8–28.3 | 28.6 | 27.0–30.1 |
| – BMI <25, % (n) | 28.5% (53) | 22.4–35.3 | 30.7% (46) | 23.7–38.4 | 19.4% (7) | 9.1–34.4 |
| – BMI 25–30, % (n) | 45.2% (84) | 38.1–52.3 | 44.7% (67) | 36.9–52.7 | 47.2% (17) | 31.7–63.2 |
| – BMI 30–35, % (n) | 16.1% (30) | 11.4–21.9 | 15.3% (23) | 10.3–21.7 | 19.4% (7) | 9.1–34.4 |
| – BMI 35–40, % (n) | 8.6% (16) | 5.2–13.3 | 7.3% (11) | 4.0–12.3 | 13.9% (5) | 5.5–27.8 |
| – BMI >40, % (n) | 1.6% (3) | 0.5–4.2 | 2.0% (3) | 0.6–5.2 | 0% (0) | — |
| EFW before ECV, grams | 2797 | 2749–2845 | 2795 | 2741–2849 | 2806 | 2698–2914 |
| Placental location, % (n) | | | | | | |
| – Anterior | 51.6% (96) | 44.5–58.7 | 50.7% (76) | 42.7–58.6 | 55.6% (20) | 39.4–70.8 |
| – Posterior | 39.8% (74) | 33.0–46.9 | 40.0% (60) | 32.4–48.0 | 38.9% (14) | 24.3–55.2 |
| – Fundus | 4.3% (8) | 2.1–8.0 | 5.3% (8) | 2.6–9.8 | 0% (0) | — |
| – Lateral wall | 4.3% (8) | 2.1–8.0 | 4.0% (6) | 1.7–8.1 | 5.6% (2) | 1.2–16.6 |
| Amniotic fluid pocket, mm | 52.1 | 48.9–55.2 | 53.2 | 49.3–57.0 | 48.3 | 43.9–52.6 |
| Transversal lie, % (n) | 7.0% (13) | 4.0–11.3 | 7.3% (11) | 4.0–12.3 | 5.6% (2) | 1.2–16.6 |
Data presented as mean or % (number (n)). p-value $<$ 0.05 in bold, when comparing characteristics between Group A and B; Student’s t-test for normally distributed variables and chi-squared test for categorical variables. GA, Gestational Age; BMI, Body Mass Index; ECV, External Cephalic Version; CS, Cesarean Section; EFW, Estimated Fetal Weight. The overall ECV success rate was 68.8% (95% confidence interval (CI) 61.9–75.1). ECV outcomes and obstetric outcomes by group are shown in Table 2. Table 2. External cephalic version (ECV) and obstetric outcome by group. 
| Outcome | Total (182) | 95% CI | Group A (148) | 95% CI | Group B (34) | 95% CI |
| ECV success, % (n) | 68.8% (128) | 61.9–75.1 | 74.0% (111) | 66.6–80.5 | 47.2% (17) | 31.7–63.2 |
| GA at delivery, weeks | 39.5 | 39.3–39.7 | 39.5 | 39.2–39.8 | 39.5 | 39.0–40.0 |
| Type of labor, % (n) | | | | | | |
| – Spontaneous | 35.7% (65) | 29.0–42.9 | 37.8% (56) | 30.3–45.8 | 26.5% (9) | 14.0–42.8 |
| – Induction | 31.3% (57) | 24.9–38.3 | 33.8% (50) | 26.5–41.7 | 20.6% (7) | 9.7–36.2 |
| – CS | 33.0% (60) | 26.4–40.0 | 28.4% (42) | 21.6–36.0 | 52.9% (18) | 36.5–68.9 |
| Mode of delivery, % (n) | | | | | | |
| – Spontaneous | 29.7% (54) | 23.4–36.6 | 31.1% (46) | 24.0–38.8 | 23.5% (8) | 11.8–39.5 |
| – Operative | 23.6% (43) | 17.9–30.2 | 25.0% (37) | 18.6–32.4 | 17.6% (6) | 7.7–32.8 |
| – Urgent CS | 22.0% (40) | 16.4–28.4 | 21.6% (32) | 15.6–28.8 | 23.5% (8) | 11.8–39.5 |
| – Planned CS | 24.7% (45) | 18.9–31.4 | 22.3% (33) | 16.2–29.5 | 35.3% (12) | 20.9–52.0 |
Four pregnant women gave birth in a different institution. Data presented as mean or % (number (n)). p-value $<$ 0.05 in bold, when comparing characteristics between Group A and B; Student’s t-test for normally distributed variables and chi-squared test for categorical variables. ECV, External Cephalic Version; GA, Gestational Age; CS, Cesarean Section. The success rate of ECV increased from 47.2% (95% CI 31.7–63.2) in Group B to 74.0% (95% CI 66.6–80.5) in Group A (odds ratio (OR) = 3.18; 95% CI 1.40–7.20; p = 0.002) (Fig. 2). The greatest increase in the success rate of ECV was seen in nulliparas, from 38.5% (95% CI 21.8–57.6) in Group B to 69.1% (95% CI 59.4–77.6) in Group A (OR = 3.57; 95% CI 1.33–9.83; p = 0.004) (Fig. 2). Fig. 2. ECV success rate for nulliparas, multiparas, and total women. Four pregnant women gave birth in a different institution. The total vaginal delivery rate after ECV increased from 41.2% (95% CI 25.9–57.9) in Group B to 56.1% (95% CI 48.9–63.9) in Group A. Overall, the rate of planned cesarean after ECV decreased from 33.3% (95% CI 19.7–49.5) in Group B to 22.0% (95% CI 15.9–29.1) in Group A. 
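The crude odds ratio for ECV success can be reproduced from the Table 2 counts (111/150 successes in Group A vs 17/36 in Group B); a minimal stdlib sketch with a Wald 95% interval, which recovers the crude values reported in Table 3 (3.18, 1.50–6.73):

```python
import math

# Counts from Table 2: ECV success vs failure, Group A vs Group B.
a, b = 111, 150 - 111    # Group A: successes, failures
c, d = 17, 36 - 17       # Group B: successes, failures

# Cross-product odds ratio.
odds_ratio = (a * d) / (b * c)

# Wald 95% CI on the log-odds-ratio scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
# odds_ratio ≈ 3.18, CI ≈ (1.50, 6.73)
```

The narrative quotes a slightly different interval (1.40–7.20), presumably from another interval method; the Wald computation above matches the crude column of Table 3.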
After successful ECV, four pregnant women (2.2%) showed breech presentation at birth and had a planned cesarean section. These four procedures had been performed by Group A. Moreover, after a failed ECV, eight women (4.3%) showed a cephalic presentation at birth and had a vaginal delivery (seven spontaneous and one operative delivery). The spontaneous reversion to cephalic rate after a failed ECV was 15.4% for Group A and 10.6% for Group B. Multivariable logistic regression analysis showed that a larger amniotic fluid pocket (OR 1.08, 95% CI 1.04–1.12, p $<$ 0.01), ECV super-specialization (OR 3.40, 95% CI 1.23–9.42, p = 0.02), and lower maternal body mass index (BMI) (OR 0.89, 95% CI 0.80–0.98, p = 0.02) were associated with ECV success, while a previous cesarean section (OR 0.08, 95% CI 0.06–0.94, p = 0.04) was associated with failure (Table 3). Table 3. Logistic regression analysis to determine predictors of successful external cephalic version (ECV).
| Factors | Crude OR | 95% CI | p-value | Adjusted OR${}^{a}$ | 95% CI | p-value |
| ECV super-specialization | 3.18 | 1.50–6.73 | 0.03 | 3.40 | 1.23–9.42 | 0.02 |
| Multiparity | 2.54 | 1.23–5.25 | 0.01 | 2.23 | 0.73–6.79 | 0.16 |
| Previous CS | 0.17 | 0.03–0.89 | 0.03 | 0.08 | 0.06–0.94 | 0.04 |
| BMI, kg/m² | 0.91 | 0.85–0.98 | 0.01 | 0.89 | 0.80–0.98 | 0.02 |
| AF pocket, mm | 1.06 | 1.03–1.09 | $<$0.01 | 1.08 | 1.04–1.12 | $<$0.01 |
p-value $<$ 0.05 in bold. ${}^{a}$ Adjusted for multiparity, amniotic fluid pocket, body mass index, previous cesarean section, maternal age, and estimated fetal weight. OR, Odds Ratio; CI, Confidence Interval; AF, Amniotic Fluid; BMI, Body Mass Index. Over this period, 22 (11.8%) complications occurred, all during the 24 h following the procedure. The complication rate decreased from 22.2% (95% CI 11.1–37.6) in Group B to 9.3% (95% CI 5.5–14.8) in Group A (OR = 0.36; 95% CI 0.14–0.91; p = 0.032). 
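The complication-rate comparison above (9.3% in Group A, i.e., 14/150, vs 22.2% in Group B, i.e., 8/36) can be checked with a Pearson chi-squared test; a stdlib-only sketch (for df = 1 the survival function is erfc(√(χ²/2))), which reproduces p ≈ 0.032:

```python
import math

# 2x2 table of complications: Group A vs Group B.
table = [[14, 136],   # Group A: complications, no complications
         [8, 28]]     # Group B: complications, no complications

row = [sum(r) for r in table]          # row totals: [150, 36]
col = [sum(c) for c in zip(*table)]    # column totals: [22, 164]
n = sum(row)                           # grand total: 186

# Pearson chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# p-value for 1 degree of freedom.
p = math.erfc(math.sqrt(chi2 / 2))
```

Without Yates' continuity correction the statistic is about 4.62 and p about 0.032, matching the reported value.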
Thirteen cases of minor vaginal bleeding, five non-reassuring fetal heart rate patterns, two preterm ruptures of membranes, two cord prolapses, and one maternal bronchoaspiration during the procedure were reported. One newborn was admitted to the neonatal care unit due to minor respiratory distress. This was a patient with a successful ECV in whom intrauterine growth restriction was diagnosed afterward. Labor was induced with dinoprostone and ended in a spontaneous delivery with a cord blood pH = 7.28, an APGAR score at the 1st minute of life = 8, and an APGAR score at 5 minutes of life = 9. The newborn was discharged after two days with no consequences. One newborn was admitted to the neonatal intensive care unit due to major respiratory distress. This was a planned cesarean section two weeks after an unsuccessful ECV. Fetal extraction during the cesarean section was extremely difficult and required a J-shaped incision. Arterial cord blood pH was 6.97, venous cord blood pH was 7.00, the APGAR score at the 1st minute of life = 1, and the APGAR score at 5 minutes of life = 6. After 7 days, the newborn was transferred to the neonatal care unit, where she remained for 23 days. No complications arose during the following year. One patient suffered bronchoaspiration. This event occurred just after the procedure ended. The patient was admitted to the maternal care unit for intravenous antibiotic therapy. Although a cephalic presentation had been achieved, a cesarean section was ultimately performed after 7 days of treatment because of the bronchoaspiration. A female was born with an APGAR score at the 1st minute of life = 9 and an APGAR score at 5 minutes of life = 10. Arterial cord blood pH was 7.32, venous cord blood pH was 7.28. The patient and her newborn were discharged with no sequelae. 4. Discussion Experience is considered crucial in medicine in general, and in obstetrics in particular. Super-specialization in medicine improves the acquisition of experience and makes daily work safer. 
It seems logical that the introduction of a super-specialized ECV team would improve the success rate and make the procedure safer. National and international obstetrics organizations should not only support but also lead specific training and accreditation plans for external cephalic version specialization for obstetricians, midwives, and anesthesiologists in light of this and previous results. In this study, the ECV success rate increased from 47.2% (95% CI 31.7–63.2) to 74.0% (66.6–80.5%) with the introduction of a dedicated team. The number needed to treat was 6.7, meaning that 6.7 ECVs performed by the experienced dedicated ECV team led to one additional vaginal delivery in comparison with ECVs performed by the non-dedicated team. The creation of a dedicated experienced team of obstetricians to perform ECV led to an increase in the success rate and a significant decrease in the overall cesarean section rate. If the results are compared in nulliparas, a greater increase is seen, from 38.5% (95% CI 21.8–57.6) in Group B to 69.1% (95% CI 59.4–77.6) in Group A. Several studies have shown that analgesia [8, 12, 13, 14, 15, 16, 20] and tocolysis [8, 11] improve the ECV success rate. The present study has some remarkable procedural characteristics: to the best of our knowledge, this is the first group in which propofol was used for deep sedation in ECV, and, as far as tocolysis is concerned, ritodrine was administered for half an hour just before the procedure [21, 22]. Although several prediction models for the success of ECV have been published, none of them included operator experience as a predictor [23, 24]. Kim et al. [25] highlighted the importance of operator experience by developing a learning curve for ECV. They estimated that approximately 130 ECV attempts are needed to achieve an expected success rate of 70%. In contrast, in multiparas, only 10 attempts would be necessary for an expected success rate of 70%. 
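The number needed to treat quoted above follows from the total vaginal delivery rates reported in the Results (56.1% in Group A vs 41.2% in Group B); a quick check:

```python
# NNT = 1 / absolute risk difference in vaginal delivery after ECV.
p_dedicated, p_other = 0.561, 0.412   # total vaginal delivery rates, A vs B

nnt = 1 / (p_dedicated - p_other)     # ECVs per additional vaginal delivery
# nnt ≈ 6.7
```

Note that the NNT is defined here against vaginal delivery, not against ECV success (where the risk difference of 26.8 percentage points would give an NNT of about 3.7).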
Several studies have analyzed their ECV results when the procedure is performed by a single operator [17, 18] or a dedicated team [5, 10, 19, 26]. Bogner et al. [27] showed that the ECV success rate depended not only on parity and gestational age but also on the operator. Other authors have focused on the effect of a dedicated team [10, 28]. Hickland et al. [28] replaced their ECV operator every 15 days and reported an increase in the success rate from 32.6% to 41.9% over 3 years. Thissen et al. [10] compared ECVs performed by a non-experienced team with their results after the introduction of a dedicated team. They reported an increase in the ECV success rate (39.8% to 59.66%), with the greatest increase in nulliparas. Previous studies have tried to describe fetal and maternal characteristics that may predict the ECV result [6, 23, 24, 26, 29]. Normal or high amniotic fluid volume, multiparity, BMI $<$35 kg/m², reduced bladder volume, fetal transverse lie, and increased estimated fetal weight are predictors of ECV success in several studies [9, 24, 26]. The present study found that previous cesarean section, normal to high amniotic fluid volume, and lower BMI were associated with the ECV outcome. Other factors such as transverse lie, placental position, multiparity, or estimated fetal weight (EFW) before ECV did not reach a statistically significant association with the ECV success rate in the multivariable model. These associations might have reached statistical significance if a larger number of pregnant women had been recruited. ECV is considered to be a safe procedure for achieving a cephalic presentation. Two studies analyzed the ECV complication rate with a dedicated team [30, 31]. Beuckens et al. [30] reported 47.2% ECV success and 2.63% complications during the 48 hours after the procedure. Rodgers et al. [31] reported a success rate of 35% for nulliparas and 62% for multiparas and an ECV complication rate of 4.73%. 
In both studies, ECV was performed without analgesia or tocolysis, which may explain the lower success and complication rates. The present study found that the introduction of an experienced dedicated team decreased the ECV complication rate from 22.2% (95% CI 11.1–37.6%) to 9.3% (95% CI 5.5–14.8%) when the procedure was performed under sedation with propofol and with ritodrine as the tocolytic. Super-specialization in obstetrics is essential for improving results and maintaining the safety of procedures. ECV is an effective procedure for reducing the cesarean section rate and offering a chance of vaginal delivery. When ECV is performed by experienced obstetricians, a reduction in the complication rate and an increase in the success rate are observed [10]. Although the influence of experience on ECV has already been analyzed, previous work compared an experienced dedicated team with residents or non-experienced obstetricians [10]. The present study compared the results, in terms of effectiveness and safety, between a dedicated team and experienced senior obstetricians. The introduction of a dedicated team therefore represents an advantage not only over residents or other colleagues but also over other experienced obstetricians. In light of this study, super-specialization in ECV should be promoted in tertiary hospitals by national and international obstetrics and gynecology associations. Some key questions remain unanswered, such as the optimal learning curve, the best anesthetic technique, and the most effective tocolytic drug. In addition, clinical trials are needed to evaluate definitively the effectiveness of super-specialization in ECV, without the potential bias that can affect observational studies. Strengths and Limitations A strength of this study is that it is the first cohort study to assess the influence of an experienced dedicated team on the ECV success rate in comparison with senior experienced obstetricians.
This is the first study in which propofol was used for sedation in patients undergoing ECV. It should also be highlighted that ritodrine was administered for 30 minutes immediately before the procedure. There were no significant differences in patient and obstetric characteristics, making selection bias less likely. This study has some limitations. First, it is a retrospective observational study with potential bias; these results must be confirmed in a prospective randomized trial. In addition, the number of women who underwent ECV by the non-dedicated team is small, which may affect the power of the statistical analysis. However, the differences observed in the present study, despite the lack of power, are consistent. Given the differences in complication rates, enrolling more patients in the non-dedicated-team group might not have been ethical. Besides, the learning curve could not be evaluated in this study because of the absence of a temporal analysis. 5. Conclusions ECV is a safe and effective procedure. The introduction of an experienced dedicated team improves the ECV success rate. Previous cesarean section, lower BMI, and normal or high amniotic fluid volume were associated with an increased ECV success rate. The introduction of an experienced dedicated team reduces the ECV complication rate. ECV super-specialization plans should be led by national and international obstetrics organizations. Author Contributions JSR and RMGP helped to record data, perform ultrasound scans, and design the study. FAR, JEBC, AND and MLSF helped to record data and to design the study. FAR and JHG helped to design the study. All authors read and approved the final manuscript. Ethics Approval and Consent to Participate This study was approved by the Clinical Research Committee of the ‘Virgen de la Arrixaca’ University Clinical Hospital (2020-5-6-HCUVA). This study conforms with the 2013 Helsinki World Medical Association Declaration.
Acknowledgment Not applicable. Funding This research received no external funding. Conflict of Interest The authors declare no conflict of interest. Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The second derivative is the derivative of the derivative of y: differentiating a function once gives the first derivative, and differentiating the result again gives the second derivative. More generally, the nth derivative is the result of n successive differentiations of a function. As an example, take the second-order partial derivative of $f(x, y) = x^3 y^5$ with respect to $x$: the first partial derivative is $\partial f/\partial x = 3x^2 y^5$, and differentiating with respect to $x$ again gives $\partial^2 f/\partial x^2 = 6x y^5$. In general, a second-order partial derivative is simply a partial derivative taken to second order with respect to the variable being differentiated. The sign of the second derivative tells us whether the gradient of the original function is increasing, decreasing, or remaining constant, and the second derivative passes through zero at an inflection point; in the titration example, the second derivative shown in Figure 6-5 passes through zero at the inflection point. The second derivative can also be calculated by implicit differentiation, simplifying as much as possible at each step.
Implicit differentiation leaves us with a formula for $y'$ (and, after differentiating again, $y''$) in terms of $x$ and $y$. Recall from the Limits chapter that computing the slope of a tangent line, the instantaneous rate of change of a function, and the instantaneous velocity of an object at $x = a$ all required the limit $\lim_{x \to a} \frac{f(x) - f(a)}{x - a}$. For sampled data, derivative expressions can be based on central differences or one-sided forward and backward differences; with equal spacing, the three-point central difference is second-order accurate but is useful only for interior points. Savitzky and Golay developed a very efficient method to perform these calculations, and it is the basis of the derivatization algorithm in most commercial instruments. In a titration, the second derivative $\Delta(\Delta \mathrm{pH}/\Delta V)/\Delta V$, calculated by means of columns E through J of the spreadsheet (shown in Figure 6-4), can be used to locate the inflection point more precisely. A natural question is how the chain rule extends to second derivatives of composite functions.
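The difference quotient in the limit definition of the derivative can be watched converging numerically. This sketch evaluates it for f = sin at a = 1, where the true derivative is cos(1):

```python
import math

# The difference quotient (f(x) - f(a)) / (x - a) approaches f'(a) as x -> a.
# Illustrated for f = sin at a = 1, where f'(a) = cos(1), roughly 0.5403.
a = 1.0
q = None
for k in range(1, 6):
    x = a + 10.0 ** (-k)  # shrink the step toward a
    q = (math.sin(x) - math.sin(a)) / (x - a)
    print(f"step 1e-{k}: quotient = {q:.6f}")
```

The printed quotients approach cos(1) as the step shrinks, which is exactly what the limit definition asserts.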
For a composite function $f(g(x))$, the second derivative is $\frac{d^2}{dx^2} f(g(x)) = f''(g(x))\,(g'(x))^2 + f'(g(x))\,g''(x)$; deriving it uses the chain rule twice and the product rule once. Repeated application of the rules of differentiation likewise gives the third, fourth, and fifth derivatives, and so on. A first-order derivative can be written as $f'(x)$ or $dy/dx$, and the second-order derivative as $f''(x)$ or $d^2y/dx^2$; the second derivative can be used to determine concavity and inflection points. For example, if $f(x) = x^2 + 4x$, taking the derivative once gives $f'(x) = 2x + 4$, and differentiating again gives $f''(x) = 2$. For a parametric equation $x = f(t)$, $y = g(t)$, the first derivative follows from $dy/dx = \frac{dy/dt}{dx/dt}$; for instance, with $x = t + \cos t$ and $y = \sin t$, $dy/dx = \frac{\cos t}{1 - \sin t}$. As an implicit-differentiation example, from $x^2 + 4y^2 = 1$ we differentiate twice: $2x + 8yy' = 0$ gives $y' = -x/(4y)$, and differentiating again, $2 + 8(y')^2 + 8yy'' = 0$, so $y'' = -\frac{1 + 4(y')^2}{4y}$.
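As a sanity check on the composite second-derivative formula, the sketch below compares it against a central-difference approximation, using f = sin and g(x) = x² as a worked example:

```python
import math

# Numerical check of (f(g(x)))'' = f''(g(x)) * g'(x)**2 + f'(g(x)) * g''(x)
# for f = sin, g(x) = x**2, so the analytic result is
# -sin(x**2) * (2x)**2 + cos(x**2) * 2.
def composite(x):
    return math.sin(x * x)

def analytic_second(x):
    g, gp, gpp = x * x, 2 * x, 2.0
    return -math.sin(g) * gp ** 2 + math.cos(g) * gpp

x, h = 0.7, 1e-4
# Central-difference second derivative of the composite function.
numeric = (composite(x + h) - 2 * composite(x) + composite(x - h)) / h ** 2
print(abs(numeric - analytic_second(x)) < 1e-5)  # prints True
```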
Jacobi’s formula addresses the derivative of a determinant: suppose the elements of an $n \times n$ matrix $A$ depend on a parameter $t$ (in general there could be several parameters); the formula then expresses the derivative of $\det A$ with respect to $t$. Returning to scalar functions, the second derivative measures the instantaneous rate of change of the first derivative, so its sign tells us whether the slope of the tangent line to $f$ is increasing or decreasing. In the second derivative test, the second derivative is evaluated at each critical point: when the graph is concave up, the critical point represents a local minimum; when the graph is concave down, it represents a local maximum. In a titration curve, by contrast, the maximum in the first derivative curve must still be estimated visually. Finally, suppose $f$ is a one-one function that is twice differentiable with a nonzero first derivative. Then the second derivative of the inverse function at a generic point $y = f(x)$ is $(f^{-1})''(y) = -\dfrac{f''(x)}{(f'(x))^3}$.
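The inverse-function second-derivative formula can likewise be verified numerically. Here f(x) = x³ is used, since its inverse y^(1/3) can also be differentiated directly for comparison:

```python
# Check of (f^-1)''(y) = -f''(x) / f'(x)**3 at y = f(x),
# with f(x) = x**3 (one-one; f'(x) = 3x**2 is nonzero for x != 0).
x = 2.0
y = x ** 3                      # y = 8
fp, fpp = 3 * x ** 2, 6 * x     # f'(2) = 12, f''(2) = 12
formula = -fpp / fp ** 3        # -12 / 1728 = -1/144

# Direct computation: (y**(1/3))'' = -(2/9) * y**(-5/3).
direct = -(2 / 9) * y ** (-5 / 3)
print(abs(formula - direct) < 1e-12)  # prints True
```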
The same approach used for the first derivative can be used to develop finite difference formulas for the second derivative. The central difference formula for the second derivative, $f''(x) \approx \dfrac{f(x+h) - 2f(x) + f(x-h)}{h^2}$, is second-order accurate but applies only to interior points; use a forward difference at the first point of a sampled vector, a backward difference at the last point, and the central difference everywhere in between. Computing the derivative at every point of the vector this way provides better results than recursive applications of diff. As a further worked example, if $y = x^3 + 29$ then $dy/dx = 3x^2$, $d^2y/dx^2 = 6x$, and the third derivative is the constant $6$. Graphically, the slope of a function at each point is the $y$-value of its derivative plotted below it; for instance, where the $\sin(x)$ curve flattens out (slope $0$), its derivative crosses zero. The point where a graph changes between concave up and concave down is called an inflection point (see Figure 2); equivalently, if the second derivative is positive on one side of a point and negative on the other, that point is an inflection point.
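A minimal sketch of the scheme just described (central differences in the interior, one-sided differences at the endpoints), applied to the sampled $y = x^3 + 29$ example:

```python
# Second derivative of a sampled function: central differences at interior
# points, with simple one-sided (forward/backward) formulas at the two ends.
def second_derivative(ys, h):
    n = len(ys)
    d2 = [0.0] * n
    for i in range(1, n - 1):
        d2[i] = (ys[i + 1] - 2 * ys[i] + ys[i - 1]) / h ** 2
    d2[0] = (ys[2] - 2 * ys[1] + ys[0]) / h ** 2       # forward difference
    d2[-1] = (ys[-1] - 2 * ys[-2] + ys[-3]) / h ** 2   # backward difference
    return d2

# Sample y = x**3 + 29 on [0, 1]; the exact second derivative is 6x,
# and the central formula is exact (up to roundoff) for a cubic.
h = 0.01
xs = [i * h for i in range(101)]
ys = [x ** 3 + 29 for x in xs]
d2 = second_derivative(ys, h)
print(abs(d2[50] - 6 * xs[50]) < 1e-6)  # interior point x = 0.5; prints True
```

The one-sided endpoint formulas are only first-order accurate, which is the usual trade-off for not having a neighbor on both sides.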
Publication Title Search for a heavy gauge boson $W^{'}$ in the final state with an electron and large missing transverse energy in pp collisions at $\sqrt{s}$ = 7 TeV Author Institution/Organisation CMS Collaboration Abstract A search for a heavy gauge boson W′ has been conducted by the CMS experiment at the LHC in the decay channel with an electron and large transverse energy imbalance ($E_{\mathrm{T}}^{\mathrm{miss}}$), using proton–proton collision data corresponding to an integrated luminosity of 36 pb$^{-1}$. No excess above standard model expectations is seen in the transverse mass distribution of the electron–$E_{\mathrm{T}}^{\mathrm{miss}}$ system. Assuming standard-model-like couplings and decay branching fractions, a W′ boson with a mass less than 1.36 TeV/$c^2$ is excluded at 95% confidence level. Language English Source (journal) Physics Letters B. - Amsterdam, 1967, currens Publication Amsterdam: 2011 ISSN 0370-2693 Volume/pages 698:1 (2011), p. 21-39 ISI 000289131600004